ComfyUI/comfy
rattus d30c609f5a
utils: safetensors: dont slice data on torch level (#12266)
Torch enforces alignment when viewing a tensor with a different data
type, but only relative to the tensor's own storage. Do every tensor
construction straight off the memory-view individually so PyTorch
doesn't see an alignment problem.

This is needed for handling misaligned safetensors weights, which are
reasonably common in third-party models.

This limits usage of this safetensors loader to GPU compute only,
as CPU kernels are very likely to bus error. But it works for
dynamic_vram, where we really don't want to take a deep copy, and we
always use a GPU copy_, which disentangles the misalignment.
2026-02-04 01:48:47 -05:00
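The behavior the commit relies on can be sketched as follows (a minimal illustration assuming PyTorch is installed; the buffer and offsets are made up for demonstration, not the loader's actual code). Slicing at the torch level leaves a nonzero storage offset that the dtype-changing `view` rejects; constructing the tensor straight off a memoryview slice starts its storage at offset 0, so the check passes even though the underlying pointer may still be misaligned.

```python
import torch

raw = bytearray(16)  # stand-in for a safetensors data blob (hypothetical)

# Torch-level slicing first: the float32 view sees a 1-byte storage
# offset and rejects it as misaligned.
u8 = torch.frombuffer(memoryview(raw), dtype=torch.uint8)
rejected = False
try:
    u8[1:13].view(torch.float32)
except RuntimeError:
    rejected = True  # "storage_offset must be divisible by ..." style error

# Constructing straight off the memory-view slice instead: the tensor's
# storage begins at the slice, so its offset is 0 and torch raises no
# alignment complaint -- even though the raw pointer is misaligned.
# CPU kernels touching such a tensor can bus error on some platforms,
# but a GPU copy_ re-packs the data into aligned device memory.
t = torch.frombuffer(memoryview(raw)[1:13], dtype=torch.float32)
```

The key difference is where torch measures the offset from: `view` checks the offset within an existing storage, while `frombuffer` treats the slice itself as a fresh storage starting at zero.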
audio_encoders Reduce RAM usage, fix VRAM OOMs, and fix Windows shared memory spilling with adaptive model loading (#11845) 2026-02-01 01:01:11 -05:00
cldm Add better error message for common error. (#10846) 2025-11-23 04:55:22 -05:00
comfy_types Add support for dev-only nodes. (#12106) 2026-01-27 13:03:29 -08:00
extra_samplers
image_encoders Add Hunyuan 3D 2.1 Support (#8714) 2025-09-04 20:36:20 -04:00
k_diffusion Reduce RAM usage, fix VRAM OOMs, and fix Windows shared memory spilling with adaptive model loading (#11845) 2026-02-01 01:01:11 -05:00
ldm Basic support for the ace step 1.5 model. (#12237) 2026-02-03 00:06:18 -05:00
sd1_tokenizer Silence clip tokenizer warning. (#8934) 2025-07-16 14:42:07 -04:00
t2i_adapter
taesd Support LTX2 tiny vae (taeltx_2) (#11929) 2026-01-21 23:03:51 -05:00
text_encoders Fix crash with ace step 1.5 (#12264) 2026-02-04 00:03:21 -05:00
weight_adapter [Weight-adapter/Trainer] Bypass forward mode in Weight adapter system (#11958) 2026-01-24 22:56:22 -05:00
checkpoint_pickle.py
cli_args.py Reduce RAM usage, fix VRAM OOMs, and fix Windows shared memory spilling with adaptive model loading (#11845) 2026-02-01 01:01:11 -05:00
clip_config_bigg.json
clip_model.py Support the siglip 2 naflex model as a clip vision model. (#11831) 2026-01-12 17:05:54 -05:00
clip_vision_config_g.json
clip_vision_config_h.json
clip_vision_config_vitl_336_llava.json
clip_vision_config_vitl_336.json
clip_vision_config_vitl.json
clip_vision_siglip2_base_naflex.json Support the siglip 2 naflex model as a clip vision model. (#11831) 2026-01-12 17:05:54 -05:00
clip_vision_siglip_384.json
clip_vision_siglip_512.json Support 512 siglip model. 2025-04-05 07:01:01 -04:00
clip_vision.py Reduce RAM usage, fix VRAM OOMs, and fix Windows shared memory spilling with adaptive model loading (#11845) 2026-02-01 01:01:11 -05:00
conds.py Add some warnings and prevent crash when cond devices don't match. (#9169) 2025-08-04 04:20:12 -04:00
context_windows.py Add handling for vace_context in context windows (#11386) 2025-12-30 14:40:42 -08:00
controlnet.py Reduce RAM usage, fix VRAM OOMs, and fix Windows shared memory spilling with adaptive model loading (#11845) 2026-02-01 01:01:11 -05:00
diffusers_convert.py
diffusers_load.py
float.py Optimize nvfp4 lora applying. (#11866) 2026-01-14 00:49:38 -05:00
gligen.py Remove some useless code. (#8812) 2025-07-06 07:07:39 -04:00
hooks.py New Year ruff cleanup. (#11595) 2026-01-01 22:06:14 -05:00
latent_formats.py Basic support for the ace step 1.5 model. (#12237) 2026-02-03 00:06:18 -05:00
lora_convert.py Implement the USO subject identity lora. (#9674) 2025-09-01 18:54:02 -04:00
lora.py Support ace step 1.5 base model loras. (#12252) 2026-02-03 13:54:23 -05:00
memory_management.py Reduce RAM usage, fix VRAM OOMs, and fix Windows shared memory spilling with adaptive model loading (#11845) 2026-02-01 01:01:11 -05:00
model_base.py Basic support for the ace step 1.5 model. (#12237) 2026-02-03 00:06:18 -05:00
model_detection.py Basic support for the ace step 1.5 model. (#12237) 2026-02-03 00:06:18 -05:00
model_management.py mm: Remove Aimdo exemption for empty_cache (#12260) 2026-02-03 21:39:19 -05:00
model_patcher.py Dynamic VRAM unloading fix (#12227) 2026-02-02 17:35:20 -05:00
model_sampling.py Refactor model sampling sigmas code. (#10250) 2025-10-08 17:49:02 -04:00
nested_tensor.py WIP way to support multi multi dimensional latents. (#10456) 2025-10-23 21:21:14 -04:00
ops.py fix pinning with model defined dtype (#12208) 2026-02-01 08:42:32 -08:00
options.py
patcher_extension.py Fix order of inputs nested merge_nested_dicts (#10362) 2025-10-15 16:47:26 -07:00
pinned_memory.py fix pinning with model defined dtype (#12208) 2026-02-01 08:42:32 -08:00
pixel_space_convert.py Changes to the previous radiance commit. (#9851) 2025-09-13 18:03:34 -04:00
quant_ops.py Optimize nvfp4 lora applying. (#11866) 2026-01-14 00:49:38 -05:00
rmsnorm.py Add warning when using old pytorch. (#9347) 2025-08-15 00:22:26 -04:00
sample.py Make regular empty latent node work properly on flux 2 variants. (#12050) 2026-01-23 19:50:48 -05:00
sampler_helpers.py skip_load_model -> force_full_load (#11390) 2025-12-17 23:29:32 -05:00
samplers.py Reduce RAM usage, fix VRAM OOMs, and fix Windows shared memory spilling with adaptive model loading (#11845) 2026-02-01 01:01:11 -05:00
sd1_clip_config.json
sd1_clip.py Basic support for the ace step 1.5 model. (#12237) 2026-02-03 00:06:18 -05:00
sd.py Support the 4B ace step 1.5 lm model. (#12257) 2026-02-03 19:01:38 -05:00
sdxl_clip.py Add a T5TokenizerOptions node to set options for the T5 tokenizer. (#7803) 2025-04-25 19:36:00 -04:00
supported_models_base.py Fix some custom nodes. (#11134) 2025-12-05 18:25:31 -05:00
supported_models.py Support the 4B ace step 1.5 lm model. (#12257) 2026-02-03 19:01:38 -05:00
utils.py utils: safetensors: dont slice data on torch level (#12266) 2026-02-04 01:48:47 -05:00
windows.py Reduce RAM usage, fix VRAM OOMs, and fix Windows shared memory spilling with adaptive model loading (#11845) 2026-02-01 01:01:11 -05:00