| Name | Last commit message | Last commit date |
| --- | --- | --- |
| audio_encoders | Reduce RAM usage, fix VRAM OOMs, and fix Windows shared memory spilling with adaptive model loading (#11845) | 2026-02-01 01:01:11 -05:00 |
| cldm | Add better error message for common error. (#10846) | 2025-11-23 04:55:22 -05:00 |
| comfy_types | feat: add gradient-slider display mode for FLOAT inputs (#12536) | 2026-02-20 22:52:32 -08:00 |
| extra_samplers | | |
| image_encoders | Add Hunyuan 3D 2.1 Support (#8714) | 2025-09-04 20:36:20 -04:00 |
| k_diffusion | ace15: Use dynamic_vram friendly trange (#12409) | 2026-02-11 14:53:42 -05:00 |
| ldm | feat: Support SDPose-OOD (#12661) | 2026-02-26 19:59:05 -05:00 |
| sd1_tokenizer | Silence clip tokenizer warning. (#8934) | 2025-07-16 14:42:07 -04:00 |
| t2i_adapter | | |
| taesd | Support LTX2 tiny vae (taeltx_2) (#11929) | 2026-01-21 23:03:51 -05:00 |
| text_encoders | Fix ltxav te mem estimation. (#12643) | 2026-02-25 23:13:47 -05:00 |
| weight_adapter | MPDynamic: force load flux img_in weight (Fixes flux1 canny+depth lora crash) (#12446) | 2026-02-15 20:30:09 -05:00 |
| cli_args.py | Reduce RAM usage, fix VRAM OOMs, and fix Windows shared memory spilling with adaptive model loading (#11845) | 2026-02-01 01:01:11 -05:00 |
| clip_config_bigg.json | | |
| clip_model.py | Support the siglip 2 naflex model as a clip vision model. (#11831) | 2026-01-12 17:05:54 -05:00 |
| clip_vision_config_g.json | | |
| clip_vision_config_h.json | | |
| clip_vision_config_vitl_336_llava.json | Support llava clip vision model. | 2025-03-06 00:24:43 -05:00 |
| clip_vision_config_vitl_336.json | | |
| clip_vision_config_vitl.json | | |
| clip_vision_siglip2_base_naflex.json | Support the siglip 2 naflex model as a clip vision model. (#11831) | 2026-01-12 17:05:54 -05:00 |
| clip_vision_siglip_384.json | | |
| clip_vision_siglip_512.json | Support 512 siglip model. | 2025-04-05 07:01:01 -04:00 |
| clip_vision.py | Reduce RAM usage, fix VRAM OOMs, and fix Windows shared memory spilling with adaptive model loading (#11845) | 2026-02-01 01:01:11 -05:00 |
| conds.py | Cleanups to the last PR. (#12646) | 2026-02-26 01:30:31 -05:00 |
| context_windows.py | Add handling for vace_context in context windows (#11386) | 2025-12-30 14:40:42 -08:00 |
| controlnet.py | Add working Qwen 2512 ControlNet (Fun ControlNet) support (#12359) | 2026-02-13 22:23:52 -05:00 |
| diffusers_convert.py | | |
| diffusers_load.py | | |
| float.py | Optimize nvfp4 lora applying. (#11866) | 2026-01-14 00:49:38 -05:00 |
| gligen.py | Remove some useless code. (#8812) | 2025-07-06 07:07:39 -04:00 |
| hooks.py | New Year ruff cleanup. (#11595) | 2026-01-01 22:06:14 -05:00 |
| latent_formats.py | Basic support for the ace step 1.5 model. (#12237) | 2026-02-03 00:06:18 -05:00 |
| lora_convert.py | Use torch RMSNorm for flux models and refactor hunyuan video code. (#12432) | 2026-02-13 15:35:13 -05:00 |
| lora.py | feat(ace-step): add ACE-Step 1.5 lycoris key alias mapping for LoKR #12638 (#12665) | 2026-02-26 18:19:19 -05:00 |
| memory_management.py | comfy-aimdo 0.2 - Improved pytorch allocator integration (#12557) | 2026-02-21 10:52:57 -08:00 |
| model_base.py | Cleanups to the last PR. (#12646) | 2026-02-26 01:30:31 -05:00 |
| model_detection.py | feat: Support SDPose-OOD (#12661) | 2026-02-26 19:59:05 -05:00 |
| model_management.py | comfy-aimdo 0.2 - Improved pytorch allocator integration (#12557) | 2026-02-21 10:52:57 -08:00 |
| model_patcher.py | Disable dynamic_vram when using torch compiler (#12612) | 2026-02-24 19:13:46 -05:00 |
| model_sampling.py | initial FlowRVS support (#12637) | 2026-02-25 23:38:46 -05:00 |
| nested_tensor.py | WIP way to support multi multi dimensional latents. (#10456) | 2025-10-23 21:21:14 -04:00 |
| ops.py | Fix Aimdo fallback on probe to not use zero-copy sft (#12634) | 2026-02-25 16:49:48 -05:00 |
| options.py | | |
| patcher_extension.py | Fix order of inputs nested merge_nested_dicts (#10362) | 2025-10-15 16:47:26 -07:00 |
| pinned_memory.py | fix pinning with model defined dtype (#12208) | 2026-02-01 08:42:32 -08:00 |
| pixel_space_convert.py | Changes to the previous radiance commit. (#9851) | 2025-09-13 18:03:34 -04:00 |
| quant_ops.py | Optimize nvfp4 lora applying. (#11866) | 2026-01-14 00:49:38 -05:00 |
| rmsnorm.py | Remove code to support RMSNorm on old pytorch. (#12499) | 2026-02-16 20:09:24 -05:00 |
| sample.py | Make regular empty latent node work properly on flux 2 variants. (#12050) | 2026-01-23 19:50:48 -05:00 |
| sampler_helpers.py | [Trainer] training with proper offloading (#12189) | 2026-02-10 21:45:19 -05:00 |
| samplers.py | Reduce RAM usage, fix VRAM OOMs, and fix Windows shared memory spilling with adaptive model loading (#11845) | 2026-02-01 01:01:11 -05:00 |
| sd1_clip_config.json | | |
| sd1_clip.py | Force min length 1 when tokenizing for text generation. (#12538) | 2026-02-19 22:57:44 -05:00 |
| sd.py | initial FlowRVS support (#12637) | 2026-02-25 23:38:46 -05:00 |
| sdxl_clip.py | Add a T5TokenizerOptions node to set options for the T5 tokenizer. (#7803) | 2025-04-25 19:36:00 -04:00 |
| supported_models_base.py | Fix some custom nodes. (#11134) | 2025-12-05 18:25:31 -05:00 |
| supported_models.py | feat: Support SDPose-OOD (#12661) | 2026-02-26 19:59:05 -05:00 |
| utils.py | Fix Aimdo fallback on probe to not use zero-copy sft (#12634) | 2026-02-25 16:49:48 -05:00 |
| windows.py | Reduce RAM usage, fix VRAM OOMs, and fix Windows shared memory spilling with adaptive model loading (#11845) | 2026-02-01 01:01:11 -05:00 |