ComfyUI/comfy
Rattus 64c2541b05 execution: add aimdo primary pytorch cache integration
We need to do general PyTorch cache defragmentation at an appropriate
level for aimdo. Doing it here, on a per-node basis, has a reasonable
chance of purging stale shapes out of the PyTorch caching allocator and
saving VRAM without costing too much garbage collector thrash.

This looks like a lot of GC, but aimdo never hits allocation failures
from PyTorch and saves the PyTorch allocator from ever needing to
defragment on demand; it just needs an oil change every now and then, so
we do it here. Doing it here also means the PyTorch temporaries are
cleared out of the VRAM usage shown in Task Manager, so user anxiety can
drop a little when they see their VRAM fall back at the end of a workflow
in line with inference usage (rather than assuming a full VRAM leak).
2026-01-13 19:58:06 +10:00
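A minimal sketch of what this per-node cleanup could look like, assuming a
hypothetical run_node hook and cleanup_torch_cache helper (the actual
integration lives in execution.py and comfy/memory_management.py in this
commit, which is not shown here); the threshold is illustrative, and only
gc.collect and the torch.cuda memory calls are real standard/PyTorch APIs.

    # Hypothetical sketch; run_node and cleanup_torch_cache are assumed
    # names, not ComfyUI's actual API.
    import gc
    import torch

    def cleanup_torch_cache(threshold_bytes: int = 512 * 1024 * 1024):
        """Release cached-but-unused allocator blocks back to the driver.

        Only worth doing when the caching allocator holds noticeably more
        memory than the live tensors need (e.g. stale shapes left over
        from a previous node).
        """
        if not torch.cuda.is_available():
            return
        reserved = torch.cuda.memory_reserved()    # blocks held by the allocator
        allocated = torch.cuda.memory_allocated()  # bytes backing live tensors
        if reserved - allocated > threshold_bytes:
            gc.collect()              # drop Python refs still pinning tensors
            torch.cuda.empty_cache()  # return unused cached blocks to the driver

    def run_node(node, inputs):
        output = node.execute(**inputs)  # hypothetical per-node execution call
        cleanup_torch_cache()            # the per-node "oil change" described above
        return output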
..
audio_encoders models: Use CoreModelPatcher 2026-01-13 19:58:06 +10:00
cldm Add better error message for common error. (#10846) 2025-11-23 04:55:22 -05:00
comfy_types LoRA Trainer: LoRA training node in weight adapter scheme (#8446) 2025-06-13 19:25:59 -04:00
extra_samplers Uni pc sampler now works with audio and video models. 2025-01-18 05:27:58 -05:00
image_encoders Add Hunyuan 3D 2.1 Support (#8714) 2025-09-04 20:36:20 -04:00
k_diffusion Fix noise with ancestral samplers when inferencing on cpu. (#11528) 2025-12-26 22:03:01 -05:00
ldm models: Use CoreModelPatcher 2026-01-13 19:58:06 +10:00
sd1_tokenizer Silence clip tokenizer warning. (#8934) 2025-07-16 14:42:07 -04:00
t2i_adapter Controlnet refactor. 2024-06-27 18:43:11 -04:00
taesd New Year ruff cleanup. (#11595) 2026-01-01 22:06:14 -05:00
text_encoders Fix chroma fp8 te being treated as fp16. (#11795) 2026-01-10 14:40:42 -08:00
weight_adapter Fix loras not working on mixed fp8. (#10899) 2025-11-26 00:07:58 -05:00
checkpoint_pickle.py Remove pytorch_lightning dependency. 2023-06-13 10:11:33 -04:00
cli_args.py Add most basic Asset support for models (#11315) 2026-01-08 22:21:51 -05:00
clip_config_bigg.json Fix potential issue with non clip text embeddings. 2024-07-30 14:41:13 -04:00
clip_model.py Support the siglip 2 naflex model as a clip vision model. (#11831) 2026-01-12 17:05:54 -05:00
clip_vision_config_g.json Add support for clip g vision model to CLIPVisionLoader. 2023-08-18 11:13:29 -04:00
clip_vision_config_h.json Add support for unCLIP SD2.x models. 2023-04-01 23:19:15 -04:00
clip_vision_config_vitl_336_llava.json Support llava clip vision model. 2025-03-06 00:24:43 -05:00
clip_vision_config_vitl_336.json support clip-vit-large-patch14-336 (#4042) 2024-07-17 13:12:50 -04:00
clip_vision_config_vitl.json Add support for unCLIP SD2.x models. 2023-04-01 23:19:15 -04:00
clip_vision_siglip2_base_naflex.json Support the siglip 2 naflex model as a clip vision model. (#11831) 2026-01-12 17:05:54 -05:00
clip_vision_siglip_384.json Support new flux model variants. 2024-11-21 08:38:23 -05:00
clip_vision_siglip_512.json Support 512 siglip model. 2025-04-05 07:01:01 -04:00
clip_vision.py models: Use CoreModelPatcher 2026-01-13 19:58:06 +10:00
conds.py Add some warnings and prevent crash when cond devices don't match. (#9169) 2025-08-04 04:20:12 -04:00
context_windows.py Add handling for vace_context in context windows (#11386) 2025-12-30 14:40:42 -08:00
controlnet.py models: Use CoreModelPatcher 2026-01-13 19:58:06 +10:00
diffusers_convert.py Remove useless code. 2025-01-24 06:15:54 -05:00
diffusers_load.py load_unet -> load_diffusion_model with a model_options argument. 2024-08-12 23:20:57 -04:00
float.py Refactor to try to lower mem usage. (#11840) 2026-01-12 21:01:52 -08:00
gligen.py Remove some useless code. (#8812) 2025-07-06 07:07:39 -04:00
hooks.py New Year ruff cleanup. (#11595) 2026-01-01 22:06:14 -05:00
latent_formats.py Disable ltxav previews. (#11676) 2026-01-06 17:41:27 -05:00
lora_convert.py Implement the USO subject identity lora. (#9674) 2025-09-01 18:54:02 -04:00
lora.py Support ModelScope-Trainer DiffSynth lora for Z Image. (#11805) 2026-01-12 15:38:46 -05:00
memory_management.py execution: add aimdo primary pytorch cache integration 2026-01-13 19:58:06 +10:00
model_base.py models: Use CoreModelPatcher 2026-01-13 19:58:06 +10:00
model_detection.py Fix issue with t5 text encoder in fp4. (#11794) 2026-01-10 17:31:31 -05:00
model_management.py ops/mp: implement aimdo 2026-01-13 19:58:06 +10:00
model_patcher.py ops/mp: implement aimdo 2026-01-13 19:58:06 +10:00
model_sampling.py Refactor model sampling sigmas code. (#10250) 2025-10-08 17:49:02 -04:00
nested_tensor.py WIP way to support multi multi dimensional latents. (#10456) 2025-10-23 21:21:14 -04:00
ops.py ops/mp: implement aimdo 2026-01-13 19:58:06 +10:00
options.py Only parse command line args when main.py is called. 2023-09-13 11:38:20 -04:00
patcher_extension.py Fix order of inputs nested merge_nested_dicts (#10362) 2025-10-15 16:47:26 -07:00
pinned_memory.py pinned_memory: add python 2026-01-13 19:55:35 +10:00
pixel_space_convert.py Changes to the previous radiance commit. (#9851) 2025-09-13 18:03:34 -04:00
quant_ops.py Make loras work on nvfp4 models. (#11837) 2026-01-12 22:33:54 -05:00
rmsnorm.py Add warning when using old pytorch. (#9347) 2025-08-15 00:22:26 -04:00
sample.py Fix mistake. (#10484) 2025-10-25 23:07:29 -04:00
sampler_helpers.py skip_load_model -> force_full_load (#11390) 2025-12-17 23:29:32 -05:00
samplers.py mp: wrap get_free_memory 2026-01-13 19:55:35 +10:00
sd1_clip_config.json Fix potential issue with non clip text embeddings. 2024-07-30 14:41:13 -04:00
sd1_clip.py Disable prompt weights on newbie te. (#11434) 2025-12-20 00:19:47 -05:00
sd.py models: Use CoreModelPatcher 2026-01-13 19:58:06 +10:00
sdxl_clip.py Add a T5TokenizerOptions node to set options for the T5 tokenizer. (#7803) 2025-04-25 19:36:00 -04:00
supported_models_base.py Fix some custom nodes. (#11134) 2025-12-05 18:25:31 -05:00
supported_models.py Bump ltxav mem estimation a bit. (#11842) 2026-01-13 01:42:07 -05:00
utils.py move string_to_seed to utils.py 2026-01-13 19:55:35 +10:00