ComfyUI/comfy
Latest commit: 16b9aabd52 by Jukka Seppänen
Support Multi/InfiniteTalk (#10179)
* re-init

* Update model_multitalk.py

* whitespace...

* Update model_multitalk.py

* remove print

* this is redundant

* remove import

* Restore preview functionality

* Move block_idx to transformer_options

* Remove LoopingSamplerCustomAdvanced

* Remove looping functionality, keep extension functionality

* Update model_multitalk.py

* Handle ref_attn_mask with separate patch to avoid having to always return q and k from self_attn

* Chunk attention map calculation for multiple speakers to reduce peak VRAM usage

* Update model_multitalk.py

* Add ModelPatch type back

* Fix for latest upstream

* Use DynamicCombo for cleaner node

Mainly so that single_speaker mode hides the mask inputs and the second audio input

* Update nodes_wan.py
2026-01-21 23:09:48 -05:00
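The "chunk attention map calculation ... to reduce peak VRAM usage" commit can be illustrated generically. The sketch below is an assumption-laden NumPy illustration, not ComfyUI's actual implementation: the function name `chunked_attn_map` and the chunk size are invented for the example. The idea is simply to materialize the attention logits a block of query rows at a time, so the peak intermediate size is `(chunk, k_len)` instead of `(q_len, k_len)`:

```python
import numpy as np

def chunked_attn_map(q, k, chunk=128):
    """Compute softmax(q @ k.T / sqrt(d)) one block of query rows at a time.

    Processing `chunk` rows at once caps the peak size of the intermediate
    logits buffer at (chunk, k_len) rather than the full (q_len, k_len).
    """
    scale = 1.0 / np.sqrt(q.shape[-1])
    out = np.empty((q.shape[0], k.shape[0]), dtype=q.dtype)
    for start in range(0, q.shape[0], chunk):
        block = q[start:start + chunk] @ k.T * scale    # (<=chunk, k_len) logits
        block -= block.max(axis=-1, keepdims=True)      # numerically stable softmax
        np.exp(block, out=block)
        block /= block.sum(axis=-1, keepdims=True)
        out[start:start + chunk] = block
    return out
```

The result is bitwise-equivalent in layout to the unchunked computation; only the peak memory of the temporary logits changes, which is why this kind of chunking is a common trick when an attention map over many speakers or reference frames would otherwise be too large to hold at once.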
audio_encoders
cldm
comfy_types
extra_samplers
image_encoders
k_diffusion
ldm Support Multi/InfiniteTalk (#10179) 2026-01-21 23:09:48 -05:00
sd1_tokenizer
t2i_adapter
taesd Support LTX2 tiny vae (taeltx_2) (#11929) 2026-01-21 23:03:51 -05:00
text_encoders More targeted embedding_connector loading for LTX2 text encoder (#11992) 2026-01-21 23:05:06 -05:00
weight_adapter
checkpoint_pickle.py
cli_args.py
clip_config_bigg.json
clip_model.py
clip_vision_config_g.json
clip_vision_config_h.json
clip_vision_config_vitl_336_llava.json
clip_vision_config_vitl_336.json
clip_vision_config_vitl.json
clip_vision_siglip2_base_naflex.json
clip_vision_siglip_384.json
clip_vision_siglip_512.json
clip_vision.py Add image sizes to clip vision outputs. (#11923) 2026-01-16 23:02:28 -05:00
conds.py
context_windows.py
controlnet.py
diffusers_convert.py
diffusers_load.py
float.py Optimize nvfp4 lora applying. (#11866) 2026-01-14 00:49:38 -05:00
gligen.py
hooks.py
latent_formats.py
lora_convert.py
lora.py
model_base.py Support the Anima model. (#12012) 2026-01-21 19:44:28 -05:00
model_detection.py Support the Anima model. (#12012) 2026-01-21 19:44:28 -05:00
model_management.py
model_patcher.py
model_sampling.py
nested_tensor.py
ops.py Make loras work on nvfp4 models. (#11837) 2026-01-12 22:33:54 -05:00
options.py
patcher_extension.py
pixel_space_convert.py
quant_ops.py Optimize nvfp4 lora applying. (#11866) 2026-01-14 00:49:38 -05:00
rmsnorm.py
sample.py
sampler_helpers.py
samplers.py
sd1_clip_config.json
sd1_clip.py
sd.py Support LTX2 tiny vae (taeltx_2) (#11929) 2026-01-21 23:03:51 -05:00
sdxl_clip.py
supported_models_base.py
supported_models.py Support the Anima model. (#12012) 2026-01-21 19:44:28 -05:00
utils.py Add LyCoris LoKr MLP layer support for Flux2 (#11997) 2026-01-20 23:18:33 -05:00