ComfyUI/comfy
Rattus 243fb596f9 Reduce RAM and compute time in model saving with Loras
Move the model saving logic away from force_patch_weights and instead do
the patching JIT during safetensors saving.

First, switch off force_patch_weights in the load-for-save path, which
avoids creating CPU-side tensors with the LoRAs already calculated.

Then, at save time, wrap each tensor to catch the safetensors call to .to()
and patch it live.

This avoids ever needing a LoRA-patched copy of the offloaded weights on
the CPU.

Also take advantage of the GPU, when present, for this LoRA calculation.
The former force_patch_weights path did everything on the CPU. It is
generally faster to go to the GPU and back, even if it is just a LoRA
application.
2026-01-21 14:32:12 +10:00
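
A minimal sketch of the "patch on .to()" idea described in the commit message above. The class and argument names (JITPatchedTensor, lora_down, lora_up, alpha) are illustrative assumptions, not ComfyUI's actual API, and a plain W + alpha * (up @ down) LoRA form is assumed.

# Hypothetical sketch, not the actual ComfyUI implementation.
import torch


class JITPatchedTensor:
    """Holds an unpatched base weight plus its LoRA factors and only
    materializes the patched weight when .to() is called by the saver."""

    def __init__(self, base_weight, lora_down, lora_up, alpha=1.0):
        self.base_weight = base_weight   # offloaded, unpatched weight
        self.lora_down = lora_down       # "A" matrix
        self.lora_up = lora_up           # "B" matrix
        self.alpha = alpha
        # Do the math on the GPU when one is available; it is usually faster
        # than patching on the CPU even counting the transfer both ways.
        self.compute_device = "cuda" if torch.cuda.is_available() else "cpu"

    def to(self, *args, **kwargs):
        # Compute the patched weight only now, at save time.
        w = self.base_weight.to(self.compute_device, dtype=torch.float32)
        delta = self.lora_up.to(self.compute_device, dtype=torch.float32) @ \
                self.lora_down.to(self.compute_device, dtype=torch.float32)
        patched = w + self.alpha * delta
        # Hand back a plain tensor on whatever device/dtype the caller asked for.
        return patched.to(*args, **kwargs)

In this sketch, the saving path would receive wrappers like this in place of plain tensors, so each patched weight exists only transiently while it is being converted and written, rather than as a full CPU-side copy of the model.
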
..
audio_encoders Support the HuMo model. (#9903) 2025-09-17 00:12:48 -04:00
cldm Add better error message for common error. (#10846) 2025-11-23 04:55:22 -05:00
comfy_types LoRA Trainer: LoRA training node in weight adapter scheme (#8446) 2025-06-13 19:25:59 -04:00
extra_samplers Uni pc sampler now works with audio and video models. 2025-01-18 05:27:58 -05:00
image_encoders Add Hunyuan 3D 2.1 Support (#8714) 2025-09-04 20:36:20 -04:00
k_diffusion Fix noise with ancestral samplers when inferencing on cpu. (#11528) 2025-12-26 22:03:01 -05:00
ldm fix: remove normalization of audio in LTX Mel spectrogram creation (#11990) 2026-01-20 18:44:28 -05:00
sd1_tokenizer Silence clip tokenizer warning. (#8934) 2025-07-16 14:42:07 -04:00
t2i_adapter
taesd New Year ruff cleanup. (#11595) 2026-01-01 22:06:14 -05:00
text_encoders Config for Qwen 3 0.6B model. (#11998) 2026-01-20 23:08:31 -05:00
weight_adapter Fix loras not working on mixed fp8. (#10899) 2025-11-26 00:07:58 -05:00
checkpoint_pickle.py
cli_args.py Add most basic Asset support for models (#11315) 2026-01-08 22:21:51 -05:00
clip_config_bigg.json
clip_model.py Support the siglip 2 naflex model as a clip vision model. (#11831) 2026-01-12 17:05:54 -05:00
clip_vision_config_g.json
clip_vision_config_h.json
clip_vision_config_vitl_336_llava.json Support llava clip vision model. 2025-03-06 00:24:43 -05:00
clip_vision_config_vitl_336.json
clip_vision_config_vitl.json
clip_vision_siglip2_base_naflex.json Support the siglip 2 naflex model as a clip vision model. (#11831) 2026-01-12 17:05:54 -05:00
clip_vision_siglip_384.json Support new flux model variants. 2024-11-21 08:38:23 -05:00
clip_vision_siglip_512.json Support 512 siglip model. 2025-04-05 07:01:01 -04:00
clip_vision.py Add image sizes to clip vision outputs. (#11923) 2026-01-16 23:02:28 -05:00
conds.py Add some warnings and prevent crash when cond devices don't match. (#9169) 2025-08-04 04:20:12 -04:00
context_windows.py Add handling for vace_context in context windows (#11386) 2025-12-30 14:40:42 -08:00
controlnet.py Fix Race condition in --async-offload that can cause corruption (#10501) 2025-10-29 17:17:46 -04:00
diffusers_convert.py Remove useless code. 2025-01-24 06:15:54 -05:00
diffusers_load.py load_unet -> load_diffusion_model with a model_options argument. 2024-08-12 23:20:57 -04:00
float.py Optimize nvfp4 lora applying. (#11866) 2026-01-14 00:49:38 -05:00
gligen.py Remove some useless code. (#8812) 2025-07-06 07:07:39 -04:00
hooks.py New Year ruff cleanup. (#11595) 2026-01-01 22:06:14 -05:00
latent_formats.py Disable ltxav previews. (#11676) 2026-01-06 17:41:27 -05:00
lora_convert.py Implement the USO subject identity lora. (#9674) 2025-09-01 18:54:02 -04:00
lora.py Support ModelScope-Trainer DiffSynth lora for Z Image. (#11805) 2026-01-12 15:38:46 -05:00
model_base.py Reduce RAM and compute time in model saving with Loras 2026-01-21 14:32:12 +10:00
model_detection.py Dynamically detect chroma radiance patch size (#11991) 2026-01-20 18:46:11 -05:00
model_management.py pythorch_attn_by_def_on_gfx1200 (#11793) 2026-01-10 16:51:05 -05:00
model_patcher.py Reduce RAM and compute time in model saving with Loras 2026-01-21 14:32:12 +10:00
model_sampling.py Refactor model sampling sigmas code. (#10250) 2025-10-08 17:49:02 -04:00
nested_tensor.py WIP way to support multi multi dimensional latents. (#10456) 2025-10-23 21:21:14 -04:00
ops.py Make loras work on nvfp4 models. (#11837) 2026-01-12 22:33:54 -05:00
options.py
patcher_extension.py Fix order of inputs nested merge_nested_dicts (#10362) 2025-10-15 16:47:26 -07:00
pixel_space_convert.py Changes to the previous radiance commit. (#9851) 2025-09-13 18:03:34 -04:00
quant_ops.py Optimize nvfp4 lora applying. (#11866) 2026-01-14 00:49:38 -05:00
rmsnorm.py Add warning when using old pytorch. (#9347) 2025-08-15 00:22:26 -04:00
sample.py Fix mistake. (#10484) 2025-10-25 23:07:29 -04:00
sampler_helpers.py skip_load_model -> force_full_load (#11390) 2025-12-17 23:29:32 -05:00
samplers.py Support nested tensor denoise masks. (#11431) 2025-12-19 19:59:25 -05:00
sd1_clip_config.json
sd1_clip.py Disable prompt weights on newbie te. (#11434) 2025-12-20 00:19:47 -05:00
sd.py Reduce RAM and compute time in model saving with Loras 2026-01-21 14:32:12 +10:00
sdxl_clip.py Add a T5TokenizerOptions node to set options for the T5 tokenizer. (#7803) 2025-04-25 19:36:00 -04:00
supported_models_base.py Fix some custom nodes. (#11134) 2025-12-05 18:25:31 -05:00
supported_models.py Adjust memory usage factor calculation for flux2 klein. (#11900) 2026-01-15 20:06:40 -05:00
utils.py Add LyCoris LoKr MLP layer support for Flux2 (#11997) 2026-01-20 23:18:33 -05:00