Commit Graph

273 Commits

Author SHA1 Message Date
doctorpangloss
4a3feee1a2 Tweaks 2025-09-23 10:28:36 -07:00
doctorpangloss
a9a0f96408 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-09-22 14:29:50 -07:00
blepping
1a85483da1
Fix reliance on asserts to raise exceptions in BatchedBrownianTree and the Flash attn module (#9884)
Correctly handle the case where w0 is passed via kwargs in BatchedBrownianTree
2025-09-15 20:05:03 -04:00
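
For context: `assert` statements are stripped when Python runs with `-O`, so validation that must always happen needs an explicit exception. A minimal sketch of the two fixes above, with illustrative names rather than the actual ComfyUI code:

```python
import torch

def make_tree(x, t0, t1, **kwargs):
    # An assert here would vanish under `python -O`; raise explicitly instead.
    if t0 > t1:
        raise ValueError("t0 must not exceed t1")
    # Handle w0 arriving via kwargs rather than assuming the default.
    w0 = kwargs.pop("w0", torch.zeros_like(x))
    return t0, t1, w0
```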
Jedrzej Kosinski
f228367c5e
Use ImportError instead of ModuleNotFoundError (#9850) 2025-09-13 21:34:21 -04:00
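
The detail behind this change: ModuleNotFoundError is a subclass of ImportError, so code that raises or handles ImportError covers both:

```python
# ModuleNotFoundError (Python 3.6+) subclasses ImportError, so the broader
# type is the safer one to raise and catch.
assert issubclass(ModuleNotFoundError, ImportError)
```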
Jedrzej Kosinski
d7f40442f9
Enable Runtime Selection of Attention Functions (#9639)
* Look into a @wrap_attn decorator that checks for an 'optimized_attention_override' entry in transformer_options

* Created logging code for this branch so that it can be used to track down all the code paths where transformer_options would need to be added

* Fix memory usage issue with inspect

* Made WAN attention receive transformer_options, test node added to wan to test out attention override later

* Added **kwargs to all attention functions so transformer_options could potentially be passed through

* Make sure wrap_attn doesn't recurse infinitely; attempt to load SageAttention and FlashAttention even when not enabled so they can be marked as available or not; create a registry of available attention functions

* Turn off attention logging for now; give AttentionOverrideTestNode a dropdown of available attention functions (this is a test node only)

* Make flux work with optimized_attention_override

* Add logs to verify optimized_attention_override is passed all the way into attention function

* Make Qwen work with optimized_attention_override

* Made hidream work with optimized_attention_override

* Made wan patches_replace work with optimized_attention_override

* Made SD3 work with optimized_attention_override

* Made HunyuanVideo work with optimized_attention_override

* Made Mochi work with optimized_attention_override

* Made LTX work with optimized_attention_override

* Made StableAudio work with optimized_attention_override

* Made optimized_attention_override work with ACE Step

* Made Hunyuan3D work with optimized_attention_override

* Make CosmosPredict2 work with optimized_attention_override

* Made CosmosVideo work with optimized_attention_override

* Made Omnigen 2 work with optimized_attention_override

* Made StableCascade work with optimized_attention_override

* Made AuraFlow work with optimized_attention_override

* Made Lumina work with optimized_attention_override

* Made Chroma work with optimized_attention_override

* Made SVD work with optimized_attention_override

* Fix WanI2VCrossAttention so that it expects to receive transformer_options

* Fixed Wan2.1 Fun Camera transformer_options passthrough

* Fixed WAN 2.1 VACE transformer_options passthrough

* Add optimized to get_attention_function

* Disable attention logs for now

* Remove attention logging code

* Remove _register_core_attention_functions as a precaution, since nothing should call it directly

* Satisfy ruff

* Remove AttentionOverrideTest node, that's something to cook up for later
2025-09-12 18:07:38 -04:00
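
A rough sketch of the mechanism the bullets above describe, assuming a decorator that consults transformer_options; apart from `optimized_attention_override` and `transformer_options`, all names here are illustrative:

```python
import functools

ATTENTION_REGISTRY = {}  # assumed registry of available attention backends

def register_attention(name, fn):
    ATTENTION_REGISTRY[name] = fn

def wrap_attn(attn_fn):
    @functools.wraps(attn_fn)
    def wrapper(q, k, v, *args, transformer_options=None, **kwargs):
        override = (transformer_options or {}).get("optimized_attention_override")
        # Guard against infinite recursion if the override resolves back to
        # this same wrapped function.
        if override is not None and override is not wrapper:
            return override(q, k, v, *args, **kwargs)
        return attn_fn(q, k, v, *args, **kwargs)
    return wrapper
```

Threading `transformer_options` into every attention call is why the PR has to touch each architecture (Flux, Qwen, WAN, SD3, and so on) individually.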
comfyanonymous
33bd9ed9cb
Implement hunyuan image refiner model. (#9817) 2025-09-12 00:43:20 -04:00
comfyanonymous
b288fb0db8
Small refactor of some vae code. (#9787) 2025-09-09 18:09:56 -04:00
doctorpangloss
4f6f3e9197 Fix linting issues, improve HooksSupport dummy implementation 2025-09-09 13:26:46 -07:00
doctorpangloss
f5e29f0e61 Fix sage attention for qwen-image, fix Qwen Image VAE memory usage, and improve compatibility with hooks when using KSampler-based workflows and non-ModelPatcher ModelManageable models 2025-09-09 12:58:23 -07:00
doctorpangloss
479e20d233 Fix attention dispatch bug 2025-08-26 16:40:45 -07:00
doctorpangloss
9fcc597e1f Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-08-22 18:29:55 -07:00
doctorpangloss
735a133ad4 Update to 0.3.51 2025-08-22 17:29:18 -07:00
contentis
fe31ad0276
Add elementwise fusions (#9495)
* Add elementwise fusions

* Add addcmul pattern to Qwen
2025-08-22 19:39:15 -04:00
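
For reference, `torch.addcmul(input, t1, t2)` computes `input + t1 * t2` in one fused elementwise kernel; the "addcmul pattern" for Qwen plausibly replaces modulation-style code like this (a sketch, not the actual diff):

```python
import torch

x, scale, shift = (torch.randn(2, 8) for _ in range(3))

y_unfused = shift + x * (1 + scale)           # two kernels plus a temporary
y_fused = torch.addcmul(shift, x, 1 + scale)  # one fused elementwise kernel

assert torch.allclose(y_unfused, y_fused)
```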
doctorpangloss
dfc47e0611 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-08-22 13:24:52 -07:00
comfyanonymous
9df8792d4b
Make last PR not crash comfy on old pytorch. (#9324) 2025-08-13 15:12:41 -04:00
contentis
3da5a07510
SDPA backend priority (#9299) 2025-08-13 14:53:27 -04:00
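
PyTorch exposes backend selection for scaled dot-product attention through the `torch.nn.attention.sdpa_kernel` context manager (PyTorch 2.3+); a hedged sketch:

```python
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

q = k = v = torch.randn(1, 8, 128, 64)

# Allow only the listed backends for this region; newer PyTorch versions can
# also treat the list as a priority order (set_priority=True).
with sdpa_kernel([SDPBackend.FLASH_ATTENTION, SDPBackend.EFFICIENT_ATTENTION,
                  SDPBackend.MATH]):
    out = F.scaled_dot_product_attention(q, k, v)
```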
doctorpangloss
cd17b42664 Qwen Image with sage attention workarounds 2025-08-07 17:29:23 -07:00
doctorpangloss
87bed08124 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-08-01 16:05:47 -07:00
chaObserv
61b08d4ba6
Replace manual x * sigmoid(x) with torch silu in VAE nonlinearity (#9057) 2025-07-30 19:25:56 -04:00
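
The identity being used: SiLU(x) = x · sigmoid(x), so the built-in (with its fused kernel) is a drop-in replacement:

```python
import torch
import torch.nn.functional as F

x = torch.randn(4, 4)
assert torch.allclose(x * torch.sigmoid(x), F.silu(x))
```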
doctorpangloss
03e5430121 Improvements for Wan 2.2 support
- add xet support and add the xet cache to manageable directories
- xet is enabled by default
- fix logging to the root logger in various places
- improve logging around model unloading and loading
- TorchCompileNode now supports the VAE
- a missing torchaudio now causes less noise in the logs
- feature flags are assumed to support everything in the distributed progress context
- fix progress notifications
2025-07-28 14:36:27 -07:00
doctorpangloss
f6339e8115 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-06-24 12:52:54 -07:00
comfyanonymous
91d40086db
Fix pytorch warning. (#8593) 2025-06-19 11:04:52 -04:00
doctorpangloss
1d901e32eb Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-06-17 13:20:31 -07:00
doctorpangloss
3d0306b89f more fixes from pylint 2025-06-17 11:36:41 -07:00
doctorpangloss
82388d51a2 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-06-17 10:35:10 -07:00
Kohaku-Blueleaf
520eb77b72
LoRA Trainer: LoRA training node in weight adapter scheme (#8446) 2025-06-13 19:25:59 -04:00
comfyanonymous
5a87757ef9
Better error if sageattention is installed but a dependency is missing. (#8264) 2025-05-24 06:43:12 -04:00
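
A sketch of the error-message pattern, not the exact ComfyUI code: distinguish "sageattention is absent" from "sageattention is present but one of its dependencies is broken":

```python
try:
    from sageattention import sageattn
except ModuleNotFoundError as e:
    if e.name == "sageattention":
        raise  # the package itself isn't installed
    # sageattention is installed, but importing it tripped over a dependency.
    raise ImportError(
        f"sageattention is installed but its dependency {e.name!r} is "
        "missing; try reinstalling sageattention and its requirements"
    ) from e
```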
Benjamin Berman
b6d3f1fb08 Accept workflows from the command line 2025-05-07 14:53:39 -07:00
doctorpangloss
5823497d55 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-04-21 13:14:36 -07:00
Raphael Walker
89e4ea0175
Add activations_shape info in UNet models (#7482)
* Add activations_shape info in UNet models

* activations_shape should be a list
2025-04-04 21:27:54 -04:00
doctorpangloss
ffc1912eff Fix issues with tests 2025-04-04 08:27:33 -07:00
doctorpangloss
040a324346 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-03-29 15:57:24 -07:00
comfyanonymous
e471c726e5 Fallback to pytorch attention if sage attention fails. 2025-03-22 15:45:56 -04:00
FeepingCreature
9c98c6358b
Tolerate missing @torch.library.custom_op (#7234)
This can happen on Pytorch versions older than 2.4.
2025-03-14 09:51:26 -04:00
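
A hedged sketch of such a guard: `torch.library.custom_op` first appeared in PyTorch 2.4, so older versions get a no-op decorator instead (the op name below is made up):

```python
import torch

if hasattr(torch.library, "custom_op"):
    custom_op = torch.library.custom_op
else:
    def custom_op(name, *args, **kwargs):
        # Pre-2.4 fallback: return the function undecorated.
        def decorator(fn):
            return fn
        return decorator

@custom_op("mylib::double", mutates_args=())
def double(x: torch.Tensor) -> torch.Tensor:
    return x * 2
```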
FeepingCreature
7aceb9f91c
Add --use-flash-attention flag. (#7223)
* Add --use-flash-attention flag.
This is useful on AMD systems, as FA builds are still 10% faster than Pytorch cross-attention.
2025-03-14 03:22:41 -04:00
doctorpangloss
693038738a Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-02-24 09:39:26 -08:00
comfyanonymous
96d891cb94 Speedup on some models by not upcasting bfloat16 to float32 on mac. 2025-02-24 05:41:32 -05:00
comfyanonymous
aff16532d4 Remove some useless code. 2025-02-22 04:45:14 -05:00
comfyanonymous
1cd6cd6080 Disable pytorch attention in VAE for AMD. 2025-02-14 05:42:14 -05:00
comfyanonymous
e5ea112a90 Support Lumina 2 model. 2025-02-04 04:16:30 -05:00
Dr.Lt.Data
0a0df5f136
better guide message for sageattention (#6634) 2025-02-02 09:26:47 -05:00
doctorpangloss
a3452f6e6a Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-01-28 13:45:51 -08:00
comfyanonymous
96e2a45193 Remove useless code. 2025-01-23 05:56:23 -05:00
doctorpangloss
cf3c96e593 Cosmos support 2025-01-16 12:39:05 -08:00
doctorpangloss
631d9e44c6 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-01-16 09:58:02 -08:00
comfyanonymous
008761166f Optimize first attention block in cosmos VAE. 2025-01-15 21:48:46 -05:00
comfyanonymous
129d8908f7 Add argument to skip the output reshaping in the attention functions. 2025-01-10 06:27:37 -05:00
comfyanonymous
79eea51a1d Fix and enforce all ruff W rules. 2025-01-01 03:08:33 -05:00
doctorpangloss
9d5a5dd533 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-12-28 14:24:27 -08:00
comfyanonymous
d170292594 Remove some trailing white space. 2024-12-27 18:02:30 -05:00