Commit Graph

442 Commits

Author SHA1 Message Date
Yousef Rafat
86348baa9e updated based on feedback 2025-11-19 23:01:17 +02:00
Yousef Rafat
25f7bbed78 ruff check 2025-10-13 19:46:03 +03:00
Yousef Rafat
4653b9008d final changes 2025-10-13 19:28:54 +03:00
Yousef Rafat
4908e7412e bug fixes for siglip2 to work 2025-10-12 00:04:43 +03:00
Yousef Rafat
89fc51fb91 small bug fixes 2025-10-10 20:35:52 +03:00
Yousef Rafat
e684ff2505 a lot of fixes + siglip2_base support 2025-10-10 19:11:50 +03:00
Yousef Rafat
4c782e3395 fixes to make the model work 2025-10-08 19:32:32 +03:00
Yousef Rafat
220c65dc5f fixed the syncformer logic + condition-related fixes
the trimming fn needs an update because of the over-trimming
2025-10-06 23:38:47 +03:00
Yousef Rafat
95d2aae264 syncformer fix + some fixes 2025-10-06 18:47:21 +03:00
Yousef Rafat
4b6c08110d large optimizations and some fixes 2025-10-04 23:33:52 +03:00
Yousef Rafat
663d971830 work on the conditioning 2025-10-04 00:18:03 +03:00
Yousef Rafat
cc3a1389ad some fixes in model loading and nodes 2025-10-01 00:06:01 +03:00
Yousef Rafat
42a265cddf fixed multiple errors in nodes and model loading 2025-09-29 22:44:40 +03:00
Yousef Rafat
8311b156ad allowed returning frames 2025-09-27 23:47:30 +03:00
Yousef R. Gamaleldin
3773d0d16b
Merge branch 'master' into yousef-hunyuan-foley 2025-09-27 14:16:54 +03:00
Yousef Rafat
786c386c15 ... 2025-09-27 14:08:54 +03:00
Yousef Rafat
12824eac0d init 2025-09-27 13:17:20 +03:00
comfyanonymous
fccab99ec0
Fix issue with .view() in HuMo. (#10014) 2025-09-24 20:09:42 -04:00
comfyanonymous
e8df53b764
Update WanAnimateToVideo to more easily extend videos. (#9959) 2025-09-19 18:48:56 -04:00
comfyanonymous
dc95b6acc0
Basic WIP support for the wan animate model. (#9939) 2025-09-19 03:07:17 -04:00
comfyanonymous
24b0fce099
Do padding of audio embed in model for humo for more flexibility. (#9935) 2025-09-18 19:54:16 -04:00
comfyanonymous
dd611a7700
Support the HuMo 17B model. (#9912) 2025-09-17 18:39:24 -04:00
comfyanonymous
9288c78fc5
Support the HuMo model. (#9903) 2025-09-17 00:12:48 -04:00
rattus128
e42682b24e
Reduce Peak WAN inference VRAM usage (#9898)
* flux: Do the xq and xk ropes one at a time

This was doing independent interleaved tensor math on the q and k
tensors, leading to more than the minimum number of intermediates being
held in VRAM. On a bad day, it would hit a VRAM OOM on xk intermediates.

Do everything q and then everything k, so torch can garbage collect
all of q's intermediates before k allocates its intermediates.

This reduces peak VRAM usage for some WAN2.2 inferences (at least).

* wan: Optimize qkv intermediates on attention

As commented. The former logic computed independent pieces of QKV in
parallel, which held more inference intermediates in VRAM, spiking
VRAM usage. Fully roping Q and garbage collecting the intermediates
before touching K reduces the peak inference VRAM usage.
2025-09-16 19:21:14 -04:00
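
A minimal sketch of the q-then-k ordering described above (illustrative names and shapes, not the actual flux/WAN code): finishing all of q's RoPE math before touching k lets torch free q's temporaries before k allocates its own, which is what lowers the peak VRAM.

    import torch

    def apply_rope_single(x, freqs_cis):
        # Hypothetical helper: x is (..., d) with d even, freqs_cis is the usual
        # (..., d/2, 2, 2) rotary table. Rotates channel pairs of one tensor.
        x_ = x.float().reshape(*x.shape[:-1], -1, 1, 2)
        out = freqs_cis[..., 0] * x_[..., 0] + freqs_cis[..., 1] * x_[..., 1]
        return out.reshape(*x.shape).type_as(x)

    def rope_q_then_k(xq, xk, freqs_cis):
        # Finish all of q's math first so its temporaries can be collected
        # before k allocates its own, instead of interleaving the two.
        xq = apply_rope_single(xq, freqs_cis)
        xk = apply_rope_single(xk, freqs_cis)
        return xq, xk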
blepping
1a85483da1
Fix depending on asserts to raise an exception in BatchedBrownianTree and Flash attn module (#9884)
Correctly handle the case where w0 is passed by kwargs in BatchedBrownianTree
2025-09-15 20:05:03 -04:00
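
For context, a hedged sketch of the assert-to-exception idea (hypothetical helper, not the actual BatchedBrownianTree or flash-attn module code): asserts are stripped when Python runs with -O, so validation that must always fire should raise a real exception instead.

    import torch

    def resolve_w0(size, dtype=None, device=None, **kwargs):
        # Hypothetical helper: accept an optional w0 via kwargs, default to
        # zeros, and raise a real exception for anything unexpected rather
        # than relying on assert (which python -O strips).
        w0 = kwargs.pop("w0", None)
        if w0 is None:
            w0 = torch.zeros(size, dtype=dtype, device=device)
        if kwargs:
            raise TypeError(f"unexpected keyword arguments: {sorted(kwargs)}")
        return w0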
Jedrzej Kosinski
f228367c5e
Make ModuleNotFoundError ImportError instead (#9850) 2025-09-13 21:34:21 -04:00
comfyanonymous
80b7c9455b
Changes to the previous radiance commit. (#9851) 2025-09-13 18:03:34 -04:00
blepping
c1297f4eb3
Add support for Chroma Radiance (#9682)
* Initial Chroma Radiance support

* Minor Chroma Radiance cleanups

* Update Radiance nodes to ensure latents/images are on the intermediate device

* Fix Chroma Radiance memory estimation.

* Increase Chroma Radiance memory usage factor

* Increase Chroma Radiance memory usage factor once again

* Ensure images are multiples of 16 for Chroma Radiance
Add batch dimension and fix channels when necessary in ChromaRadianceImageToLatent node

* Tile Chroma Radiance NeRF to reduce memory consumption, update memory usage factor

* Update Radiance to support conv nerf final head type.

* Allow setting NeRF embedder dtype for Radiance
Bump Radiance nerf tile size to 32
Support EasyCache/LazyCache on Radiance (maybe)

* Add ChromaRadianceStubVAE node

* Crop Radiance image inputs to multiples of 16 instead of erroring, to be in line with existing VAE behavior

* Convert Chroma Radiance nodes to V3 schema.

* Add ChromaRadianceOptions node and backend support.
Cleanups/refactoring to reduce code duplication with Chroma.

* Fix overriding the NeRF embedder dtype for Chroma Radiance

* Minor Chroma Radiance cleanups

* Move Chroma Radiance to its own directory in ldm
Minor code cleanups and tooltip improvements

* Fix Chroma Radiance embedder dtype overriding

* Remove Radiance dynamic nerf_embedder dtype override feature

* Unbork Radiance NeRF embedder init

* Remove Chroma Radiance image conversion and stub VAE nodes
Add a chroma_radiance option to the VAELoader builtin node which uses comfy.sd.PixelspaceConversionVAE
Add a PixelspaceConversionVAE to comfy.sd for converting BHWC 0..1 <-> BCHW -1..1
2025-09-13 17:58:43 -04:00
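
A small sketch of the pixel-space conversion mentioned in the last bullet above (assumed layout/range semantics only, not the actual comfy.sd.PixelspaceConversionVAE code): BHWC images in 0..1 map to BCHW tensors in -1..1 and back.

    import torch

    def bhwc01_to_bchw_pm1(images: torch.Tensor) -> torch.Tensor:
        # BHWC, values in 0..1  ->  BCHW, values in -1..1
        return images.movedim(-1, 1) * 2.0 - 1.0

    def bchw_pm1_to_bhwc01(latents: torch.Tensor) -> torch.Tensor:
        # BCHW, values in -1..1  ->  BHWC, values in 0..1
        return (latents.movedim(1, -1) + 1.0) * 0.5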
comfyanonymous
a3b04de700
Hunyuan refiner vae now works with tiled. (#9836) 2025-09-12 19:46:46 -04:00
Jedrzej Kosinski
d7f40442f9
Enable Runtime Selection of Attention Functions (#9639)
* Looking into a @wrap_attn decorator to look for 'optimized_attention_override' entry in transformer_options

* Created logging code for this branch so that it can be used to track down all the code paths where transformer_options would need to be added

* Fix memory usage issue with inspect

* Made WAN attention receive transformer_options, test node added to wan to test out attention override later

* Added **kwargs to all attention functions so transformer_options could potentially be passed through

* Make sure wrap_attn doesn't make itself recurse infinitely, attempt to load SageAttention and FlashAttention if not enabled so that they can be marked as available or not, create registry for available attention

* Turn off attention logging for now, make AttentionOverrideTestNode have a dropdown with available attention (this is a test node only)

* Make flux work with optimized_attention_override

* Add logs to verify optimized_attention_override is passed all the way into attention function

* Make Qwen work with optimized_attention_override

* Made hidream work with optimized_attention_override

* Made wan patches_replace work with optimized_attention_override

* Made SD3 work with optimized_attention_override

* Made HunyuanVideo work with optimized_attention_override

* Made Mochi work with optimized_attention_override

* Made LTX work with optimized_attention_override

* Made StableAudio work with optimized_attention_override

* Made optimized_attention_override work with ACE Step

* Made Hunyuan3D work with optimized_attention_override

* Make CosmosPredict2 work with optimized_attention_override

* Made CosmosVideo work with optimized_attention_override

* Made Omnigen 2 work with optimized_attention_override

* Made StableCascade work with optimized_attention_override

* Made AuraFlow work with optimized_attention_override

* Made Lumina work with optimized_attention_override

* Made Chroma work with optimized_attention_override

* Made SVD work with optimized_attention_override

* Fix WanI2VCrossAttention so that it expects to receive transformer_options

* Fixed Wan2.1 Fun Camera transformer_options passthrough

* Fixed WAN 2.1 VACE transformer_options passthrough

* Add optimized to get_attention_function

* Disable attention logs for now

* Remove attention logging code

* Remove _register_core_attention_functions, as we wouldn't want someone to call that, just in case

* Satisfy ruff

* Remove AttentionOverrideTest node, that's something to cook up for later
2025-09-12 18:07:38 -04:00
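
A hedged sketch of the override routing this PR describes (illustrative decorator, not the exact ComfyUI API): a wrapper checks transformer_options for an 'optimized_attention_override' entry and, when present, hands the call plus the default implementation to it.

    def wrap_attn(attn_fn):
        # Illustrative decorator only, not the exact ComfyUI implementation.
        def wrapper(q, k, v, *args, transformer_options=None, **kwargs):
            options = transformer_options or {}
            override = options.get("optimized_attention_override")
            if override is not None:
                # Pass the default implementation along so the override can
                # delegate to it when it does not want to handle the call.
                return override(attn_fn, q, k, v, *args, **kwargs)
            return attn_fn(q, k, v, *args, **kwargs)
        return wrapper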
comfyanonymous
33bd9ed9cb
Implement hunyuan image refiner model. (#9817) 2025-09-12 00:43:20 -04:00
comfyanonymous
e01e99d075
Support hunyuan image distilled model. (#9807) 2025-09-10 23:17:34 -04:00
comfyanonymous
543888d3d8
Fix lowvram issue with hunyuan image vae. (#9794) 2025-09-10 02:15:34 -04:00
comfyanonymous
85e34643f8
Support hunyuan image 2.1 regular model. (#9792) 2025-09-10 02:05:07 -04:00
comfyanonymous
5c33872e2f
Fix issue on old torch. (#9791) 2025-09-10 00:23:47 -04:00
comfyanonymous
b288fb0db8
Small refactor of some vae code. (#9787) 2025-09-09 18:09:56 -04:00
Yousef Rafat
2ac8999287 final 2025-09-09 23:04:03 +03:00
Yousef Rafat
64124224e7 bug fixes + added some features 2025-09-09 01:07:36 +03:00
Yousef Rafat
233e4415a1 . 2025-09-06 01:47:53 +03:00
Yousef Rafat
f8d48917ba additional styling 2025-09-06 01:22:23 +03:00
Yousef Rafat
57c15f970c styling fixes 2025-09-06 01:17:04 +03:00
Yousef Rafat
6e9335d638 Merge branch 'yousef-higgsv2' of https://github.com/yousef-rafat/ComfyUI into yousef-higgsv2 2025-09-05 23:55:21 +03:00
Yousef Rafat
df4b6a26d1 removed test files 2025-09-05 23:55:07 +03:00
Yousef R. Gamaleldin
1cff9b8cc6
Merge branch 'master' into yousef-higgsv2 2025-09-05 23:53:29 +03:00
Yousef Rafat
254622d7c6 init 2025-09-05 23:47:56 +03:00
comfyanonymous
2ee7879a0b
Fix lowvram issues with hunyuan3d 2.1 (#9735) 2025-09-05 14:57:35 -04:00
Yousef R. Gamaleldin
261421e218
Add Hunyuan 3D 2.1 Support (#8714) 2025-09-04 20:36:20 -04:00
comfyanonymous
72855db715
Fix potential rope issue. (#9710) 2025-09-03 22:20:13 -04:00
comfyanonymous
e3018c2a5a
uso -> uxo/uno as requested. (#9688) 2025-09-02 16:12:07 -04:00
comfyanonymous
3412d53b1d
USO style reference. (#9677)
Load the projector.safetensors file with the ModelPatchLoader node and use
the siglip_vision_patch14_384.safetensors "clip vision" model and the
USOStyleReferenceNode.
2025-09-02 15:36:22 -04:00