ComfyUI/comfy
octo-patch 7c1ec4a59e Fix weight_norm parametrizations causing incorrect partial load skip in _load_list
When a module uses torch.nn.utils.parametrizations.weight_norm, its weight
parameter is moved into a 'parametrizations' child sub-module as original0 and
original1. named_parameters(recurse=True) yields 'parametrizations.weight.original0'
while named_parameters(recurse=False) does not, causing _load_list() to
incorrectly classify the module as having random weights in non-leaf
sub-modules and skip it during partial VRAM loading.

The fix skips 'parametrizations.*' entries in the check, so weight-normed modules
are correctly included in the load list and their parameters moved to GPU as needed.

The AudioOobleckVAE (Stable Audio) had a disable_offload=True workaround that
forced a full GPU load to avoid this bug. With the root cause fixed, that
workaround is no longer necessary and has been removed, re-enabling partial
VRAM offloading on systems with limited GPU memory.

Fixes #11855
2026-04-17 12:22:17 +08:00
audio_encoders Fix fp16 audio encoder models (#12811) 2026-03-06 18:20:07 -05:00
cldm Add better error message for common error. (#10846) 2025-11-23 04:55:22 -05:00
comfy_types fix: use frontend-compatible format for Float gradient_stops (#12789) 2026-03-12 10:14:28 -07:00
extra_samplers
image_encoders
k_diffusion ace15: Use dynamic_vram friendly trange (#12409) 2026-02-11 14:53:42 -05:00
ldm Fix ernie on devices that don't support fp64. (#13414) 2026-04-14 22:54:47 -04:00
sd1_tokenizer
t2i_adapter
taesd Support LTX2 tiny vae (taeltx_2) (#11929) 2026-01-21 23:03:51 -05:00
text_encoders Use ErnieTEModel_ not ErnieTEModel. (#13431) 2026-04-16 10:11:58 -04:00
weight_adapter MPDynamic: force load flux img_in weight (Fixes flux1 canny+depth lora crash) (#12446) 2026-02-15 20:30:09 -05:00
cli_args.py Integrate RAM cache with model RAM management (#13173) 2026-03-27 21:34:16 -04:00
clip_config_bigg.json
clip_model.py Support the siglip 2 naflex model as a clip vision model. (#11831) 2026-01-12 17:05:54 -05:00
clip_vision_config_g.json
clip_vision_config_h.json
clip_vision_config_vitl_336_llava.json
clip_vision_config_vitl_336.json
clip_vision_config_vitl.json
clip_vision_siglip2_base_naflex.json Support the siglip 2 naflex model as a clip vision model. (#11831) 2026-01-12 17:05:54 -05:00
clip_vision_siglip_384.json
clip_vision_siglip_512.json
clip_vision.py Reduce RAM usage, fix VRAM OOMs, and fix Windows shared memory spilling with adaptive model loading (#11845) 2026-02-01 01:01:11 -05:00
conds.py Cleanups to the last PR. (#12646) 2026-02-26 01:30:31 -05:00
context_windows.py Add slice_cond and per-model context window cond resizing (#12645) 2026-03-19 20:42:42 -07:00
controlnet.py Add working Qwen 2512 ControlNet (Fun ControlNet) support (#12359) 2026-02-13 22:23:52 -05:00
diffusers_convert.py
diffusers_load.py
float.py feat: Support mxfp8 (#12907) 2026-03-14 18:36:29 -04:00
gligen.py
hooks.py New Year ruff cleanup. (#11595) 2026-01-01 22:06:14 -05:00
latent_formats.py Feat: z-image pixel space (model still training atm) (#12709) 2026-03-02 19:43:47 -05:00
lora_convert.py Use torch RMSNorm for flux models and refactor hunyuan video code. (#12432) 2026-02-13 15:35:13 -05:00
lora.py Fix text encoder lora loading for wrapped models (#12852) 2026-03-09 16:08:51 -04:00
memory_management.py Integrate RAM cache with model RAM management (#13173) 2026-03-27 21:34:16 -04:00
model_base.py Implement Ernie Image model. (#13369) 2026-04-11 22:29:31 -04:00
model_detection.py Implement Ernie Image model. (#13369) 2026-04-11 22:29:31 -04:00
model_management.py Add a supports_fp64 function. (#13368) 2026-04-11 21:06:36 -04:00
model_patcher.py Fix weight_norm parametrizations causing incorrect partial load skip in _load_list 2026-04-17 12:22:17 +08:00
model_sampling.py initial FlowRVS support (#12637) 2026-02-25 23:38:46 -05:00
nested_tensor.py WIP way to support multi multi dimensional latents. (#10456) 2025-10-23 21:21:14 -04:00
ops.py Fix OOM regression in _apply() for quantized models during inference (#13372) 2026-04-15 02:10:36 -07:00
options.py
patcher_extension.py Fix order of inputs nested merge_nested_dicts (#10362) 2025-10-15 16:47:26 -07:00
pinned_memory.py Integrate RAM cache with model RAM management (#13173) 2026-03-27 21:34:16 -04:00
pixel_space_convert.py
quant_ops.py feat: Support mxfp8 (#12907) 2026-03-14 18:36:29 -04:00
rmsnorm.py Remove code to support RMSNorm on old pytorch. (#12499) 2026-02-16 20:09:24 -05:00
sample.py Fix fp16 intermediates giving different results. (#13100) 2026-03-21 17:53:25 -04:00
sampler_helpers.py Disable dynamic_vram when weight hooks applied (#12653) 2026-02-28 16:50:18 -05:00
samplers.py Fix sampling issue with fp16 intermediates. (#13099) 2026-03-21 17:47:42 -04:00
sd1_clip_config.json
sd1_clip.py feat: Support Qwen3.5 text generation models (#12771) 2026-03-25 22:48:28 -04:00
sd.py Fix weight_norm parametrizations causing incorrect partial load skip in _load_list 2026-04-17 12:22:17 +08:00
sdxl_clip.py
supported_models_base.py Fix some custom nodes. (#11134) 2025-12-05 18:25:31 -05:00
supported_models.py Implement Ernie Image model. (#13369) 2026-04-11 22:29:31 -04:00
utils.py Reduce tiled decode peak memory (#13050) 2026-03-19 13:29:34 -04:00
windows.py Reduce RAM usage, fix VRAM OOMs, and fix Windows shared memory spilling with adaptive model loading (#11845) 2026-02-01 01:01:11 -05:00