Commit Graph

5139 Commits

Rando717
0fc01bff3e
Update install-legacy.bat 2025-09-22 02:25:30 +02:00
Rando717
097a865bc5
Update install-for-older-amd.bat
sigh...empty lines
2025-09-22 02:24:53 +02:00
Rando717
effa9afb52
Update install-n.bat
delete deepcache
2025-09-22 02:00:50 +02:00
Rando717
c5a3a4bc3e
Update install-legacy.bat
delete deepcache
2025-09-22 02:00:28 +02:00
Rando717
59079ff00a
Update install-for-older-amd.bat
delete deepcache
2025-09-22 01:59:46 +02:00
Rando717
1254017225
Delete deepcache-sample.json 2025-09-22 01:58:41 +02:00
patientx
9d2b926f56
Merge branch 'comfyanonymous:master' into master 2025-09-21 16:57:32 +03:00
comfyanonymous
d1d9eb94b1
Lower wan memory estimation value a bit. (#9964)
Previous PR reduced the peak memory requirement.
2025-09-20 22:09:35 -04:00
Kohaku-Blueleaf
7be2b49b6b
Fix LoRA Trainer bugs with FP8 models. (#9854)
* Fix adapter weight init

* Fix fp8 model training

* Avoid inference tensor
2025-09-20 21:24:48 -04:00
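The three fixes above are generic FP8-training pitfalls. A minimal sketch of the last two, assuming a PyTorch-style trainer (function and names hypothetical, not the trainer's actual code):

```python
import torch

def make_trainable(t: torch.Tensor, compute_dtype=torch.float32) -> torch.Tensor:
    # "Avoid inference tensor": tensors created under torch.inference_mode()
    # cannot participate in autograd, so clone them into ordinary tensors first.
    if torch.is_inference(t):
        t = t.clone()
    # "Fix fp8 model training": fp8 tensors generally cannot hold gradients,
    # so keep the trainable copy in a full-precision dtype.
    if t.dtype in (torch.float8_e4m3fn, torch.float8_e5m2):
        t = t.to(compute_dtype)
    return t.requires_grad_(True)
```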
Jedrzej Kosinski
9ed3c5cc09
[Reviving #5709] Add strength input to Differential Diffusion (#9957)
* Update nodes_differential_diffusion.py

* Update nodes_differential_diffusion.py

* Make strength optional to avoid validation errors when loading old workflows, adjust step

---------

Co-authored-by: ThereforeGames <eric@sparknight.io>
2025-09-20 21:10:39 -04:00
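Declaring the new input under "optional" is the standard ComfyUI way to keep old workflows loading; a minimal node sketch of the pattern (class name and values illustrative, not the actual node):

```python
class DifferentialDiffusionSketch:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {"model": ("MODEL",)},
            # A missing optional input does not raise a validation error,
            # so graphs saved before "strength" existed still load.
            "optional": {
                "strength": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 1.0, "step": 0.01}),
            },
        }

    RETURN_TYPES = ("MODEL",)
    FUNCTION = "apply"
    CATEGORY = "model_patches/unet"

    def apply(self, model, strength=1.0):
        # Default to full strength when the input is absent (old workflows).
        return (model,)
```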
patientx
945cd4af42
Merge branch 'comfyanonymous:master' into master 2025-09-20 16:17:58 +03:00
comfyanonymous
66241cef31
Add inputs for character replacement to the WanAnimateToVideo node. (#9960) 2025-09-20 02:24:10 -04:00
patientx
c62e820d45
Merge branch 'comfyanonymous:master' into master 2025-09-20 01:51:06 +03:00
comfyanonymous
e8df53b764
Update WanAnimateToVideo to more easily extend videos. (#9959) 2025-09-19 18:48:56 -04:00
Alexander Piskun
852704c81a
fix(seedream4): add flag to ignore error on partial success (#9952) 2025-09-19 16:04:51 -04:00
Alexander Piskun
9fdf8c25ab
api_nodes: reduce default timeout from 7 days to 2 hours (#9918) 2025-09-19 16:02:43 -04:00
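For scale, the change expressed in seconds (illustrative arithmetic only, not the repo's actual constants):

```python
OLD_DEFAULT_TIMEOUT = 7 * 24 * 60 * 60  # 7 days  = 604800 seconds
NEW_DEFAULT_TIMEOUT = 2 * 60 * 60       # 2 hours =   7200 seconds, ~84x shorter
```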
comfyanonymous
dc95b6acc0
Basic WIP support for the wan animate model. (#9939) 2025-09-19 03:07:17 -04:00
Christian Byrne
711bcf33ee
Bump frontend to 1.26.13 (#9933) 2025-09-19 03:03:30 -04:00
comfyanonymous
24b0fce099
Do padding of audio embed in model for humo for more flexibility. (#9935) 2025-09-18 19:54:16 -04:00
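Padding inside the model rather than at the node boundary means any input length can be accepted; a hedged sketch of the general pattern (shapes and names assumed, not HuMo's actual code):

```python
import torch
import torch.nn.functional as F

def pad_audio_embed(embed: torch.Tensor, target_len: int) -> torch.Tensor:
    # embed: (batch, time, channels). Zero-pad the time axis up to
    # target_len, or trim it down, so downstream blocks see a fixed length.
    cur = embed.shape[1]
    if cur < target_len:
        embed = F.pad(embed, (0, 0, 0, target_len - cur))
    return embed[:, :target_len]
```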
Jodh Singh
1ea8c54064
make kernel of same type as image to avoid mismatch issues (#9932) 2025-09-18 19:51:16 -04:00
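The one-line fix pattern behind this kind of dtype mismatch, as a hedged sketch (names hypothetical):

```python
import torch

def blur(image: torch.Tensor, kernel: torch.Tensor) -> torch.Tensor:
    # A float32 kernel convolved with a float16/bfloat16 image raises a
    # dtype-mismatch error, so cast the kernel to match the image first.
    kernel = kernel.to(device=image.device, dtype=image.dtype)
    return torch.nn.functional.conv2d(image, kernel, padding=kernel.shape[-1] // 2)
```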
DELUXA
8d6653fca6
Enable fp8 ops by default on gfx1200 (#9926) 2025-09-18 19:50:37 -04:00
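Gating such defaults usually keys off the reported GPU architecture string; a hypothetical sketch (detection details assumed, not the repo's actual check):

```python
import torch

def fp8_enabled_by_default() -> bool:
    # On ROCm builds, device properties expose the gfx architecture name;
    # enable fp8 ops by default only for the gfx1200 (RDNA4) class.
    if not torch.cuda.is_available():
        return False
    arch = getattr(torch.cuda.get_device_properties(0), "gcnArchName", "")
    return arch.startswith("gfx1200")
```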
patientx
3ffaddd005
Merge pull request #307 from Rando717/Rando717-cleanup
removed pause on Git error
2025-09-18 03:02:55 +03:00
patientx
50e281dc6d
Merge branch 'comfyanonymous:master' into master 2025-09-18 03:02:08 +03:00
comfyanonymous
dd611a7700
Support the HuMo 17B model. (#9912) 2025-09-17 18:39:24 -04:00
Rando717
685b542d17
Update comfyui.bat (git)
- removed pause on the git check, so the batch file can continue on error
- expanded the error message with the Git URL
2025-09-17 23:51:38 +02:00
Rando717
005853ccaf
Update comfyui-n.bat (git) 2025-09-17 23:49:49 +02:00
patientx
a8b63b21fe
Merge branch 'comfyanonymous:master' into master 2025-09-17 12:29:45 +03:00
comfyanonymous
9288c78fc5
Support the HuMo model. (#9903) 2025-09-17 00:12:48 -04:00
rattus128
e42682b24e
Reduce Peak WAN inference VRAM usage (#9898)
* flux: Do the xq and xk ropes one at a time

This was doing independent, interleaved tensor math on the q and k
tensors, holding more than the minimum number of intermediates
in VRAM. On a bad day, it would OOM VRAM on the xk intermediates.

Do everything for q and then everything for k, so torch can garbage
collect all of q's intermediates before k allocates its own.

This reduces peak VRAM usage for some WAN2.2 inferences (at least).

* wan: Optimize qkv intermediates on attention

As commented. The former logic computed independent pieces of QKV in
parallel, which held more inference intermediates in VRAM, spiking
VRAM usage. Fully roping Q and garbage-collecting the intermediates
before touching K reduces the peak inference VRAM usage.
2025-09-16 19:21:14 -04:00
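The VRAM saving comes purely from ordering: finishing all of q's math before starting k lets the allocator reclaim q's temporaries. An illustrative sketch of the two orderings (standard rope formulation; names assumed, not the repo's actual code):

```python
import torch

def rotate_half(x: torch.Tensor) -> torch.Tensor:
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def rope_interleaved(xq, xk, cos, sin):
    # Former pattern: q and k temporaries are alive at the same time,
    # raising peak VRAM.
    q_rot, k_rot = rotate_half(xq), rotate_half(xk)
    return xq * cos + q_rot * sin, xk * cos + k_rot * sin

def rope_sequential(xq, xk, cos, sin):
    # Fixed pattern: all of q first, so its temporaries can be freed
    # before k allocates its own. Same result, lower peak VRAM.
    xq = xq * cos + rotate_half(xq) * sin
    xk = xk * cos + rotate_half(xk) * sin
    return xq, xk
```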
patientx
9cdd9e38d2
Merge branch 'comfyanonymous:master' into master 2025-09-17 00:30:44 +03:00
comfyanonymous
a39ac59c3e
Add encoder part of whisper large v3 as an audio encoder model. (#9894)
Not useful yet but some models use it.
2025-09-16 01:19:50 -04:00
blepping
1a85483da1
Fix depending on asserts to raise an exception in BatchedBrownianTree and Flash attn module (#9884)
Correctly handle the case where w0 is passed by kwargs in BatchedBrownianTree
2025-09-15 20:05:03 -04:00
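Relying on an assert to raise is fragile because `python -O` strips asserts entirely; a hedged sketch of the fix pattern (signature simplified, not the real class):

```python
def get_w0(*args, **kwargs):
    # Fragile: code that catches the AssertionError as a control-flow
    # signal breaks silently when asserts are stripped by `python -O`:
    #   assert len(args) >= 3  # ...then except: fall back
    # Robust: raise explicitly, and also accept w0 via keyword arguments.
    if "w0" in kwargs:
        return kwargs["w0"]
    if len(args) >= 3:
        return args[2]
    raise ValueError("w0 must be supplied positionally or as a keyword")
```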
patientx
a08c1e1613
Merge branch 'comfyanonymous:master' into master 2025-09-16 01:22:18 +03:00
comfyanonymous
47a9cde5d3
Support the omnigen2 umo lora. (#9886) 2025-09-15 18:10:55 -04:00
patientx
31ed58199b
Merge pull request #304 from Rando717/Rando717-cleanup
comfyui.bat cleanup with Git check
2025-09-15 12:35:24 +03:00
Rando717
d6495b10a7
Update comfyui.bat (typo) 2025-09-15 00:49:28 +02:00
Rando717
6571c9f3ef
Update comfyui.bat 2025-09-14 23:39:19 +02:00
Rando717
2f256e48cb
Update comfyui-n.bat
- reverted previous commit (removes colored + - signs)
- added check for git (install+version) and current commit hash

- stylized output: [INFO], [PASS], [FAIL]
- added app launch info
2025-09-14 23:25:39 +02:00
patientx
b01a6f3baf
Update README.md 2025-09-14 13:24:24 +03:00
patientx
09bd37e843
Merge branch 'comfyanonymous:master' into master 2025-09-14 13:23:58 +03:00
patientx
9c8fe3ac63
Update README.md 2025-09-14 13:23:45 +03:00
comfyanonymous
4f1f26ac6c
Add that hunyuan image is supported to readme. (#9857) 2025-09-14 04:05:38 -04:00
Jedrzej Kosinski
f228367c5e
Make ModuleNotFoundError ImportError instead (#9850) 2025-09-13 21:34:21 -04:00
patientx
78c0630849
Merge branch 'comfyanonymous:master' into master 2025-09-14 03:43:29 +03:00
comfyanonymous
80b7c9455b
Changes to the previous radiance commit. (#9851) 2025-09-13 18:03:34 -04:00
blepping
c1297f4eb3
Add support for Chroma Radiance (#9682)
* Initial Chroma Radiance support

* Minor Chroma Radiance cleanups

* Update Radiance nodes to ensure latents/images are on the intermediate device

* Fix Chroma Radiance memory estimation.

* Increase Chroma Radiance memory usage factor

* Increase Chroma Radiance memory usage factor once again

* Ensure images are multiples of 16 for Chroma Radiance
Add batch dimension and fix channels when necessary in ChromaRadianceImageToLatent node

* Tile Chroma Radiance NeRF to reduce memory consumption, update memory usage factor

* Update Radiance to support conv nerf final head type.

* Allow setting NeRF embedder dtype for Radiance
Bump Radiance nerf tile size to 32
Support EasyCache/LazyCache on Radiance (maybe)

* Add ChromaRadianceStubVAE node

* Crop Radiance image inputs to multiples of 16 instead of erroring to be in line with existing VAE behavior

* Convert Chroma Radiance nodes to V3 schema.

* Add ChromaRadianceOptions node and backend support.
Cleanups/refactoring to reduce code duplication with Chroma.

* Fix overriding the NeRF embedder dtype for Chroma Radiance

* Minor Chroma Radiance cleanups

* Move Chroma Radiance to its own directory in ldm
Minor code cleanups and tooltip improvements

* Fix Chroma Radiance embedder dtype overriding

* Remove Radiance dynamic nerf_embedder dtype override feature

* Unbork Radiance NeRF embedder init

* Remove Chroma Radiance image conversion and stub VAE nodes
Add a chroma_radiance option to the VAELoader builtin node which uses comfy.sd.PixelspaceConversionVAE
Add a PixelspaceConversionVAE to comfy.sd for converting BHWC 0..1 <-> BCHW -1..1
2025-09-13 17:58:43 -04:00
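The two conversions the PixelspaceConversionVAE performs are simple and worth spelling out; a minimal sketch, assuming PyTorch tensors (helper names hypothetical):

```python
import torch

def image_to_latent(img: torch.Tensor) -> torch.Tensor:
    # BHWC in [0, 1] -> BCHW in [-1, 1]
    return img.movedim(-1, 1) * 2.0 - 1.0

def latent_to_image(lat: torch.Tensor) -> torch.Tensor:
    # BCHW in [-1, 1] -> BHWC in [0, 1]
    return (lat.movedim(1, -1) + 1.0) * 0.5

def crop_to_multiple(img: torch.Tensor, m: int = 16) -> torch.Tensor:
    # Crop H and W down to multiples of m instead of erroring,
    # matching the existing VAE behavior described above (img is BHWC).
    h, w = img.shape[1], img.shape[2]
    return img[:, : h - h % m, : w - w % m, :]
```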
Kimbing Ng
e5e70636e7
Remove single quote pattern to avoid wrong matches (#9842) 2025-09-13 16:59:19 -04:00
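A hypothetical illustration of why a single-quote alternative over-matches: apostrophes count as opening quotes (pattern and strings invented for the example):

```python
import re

both = re.compile(r"""["'](.*?)["']""")   # accepts ' or "
double_only = re.compile(r'"(.*?)"')      # after removing the ' pattern

s = "it's a \"quoted\" word"
print(both.findall(s))         # ['s a '] -- the apostrophe opened a bogus match
print(double_only.findall(s))  # ['quoted']
```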
Rando717
4b607e54fd
Update comfyui-n.bat
- Stylized git pull output
2025-09-13 15:33:52 +02:00
Rando717
7ef7b0a1b5
Update comfyui.bat
- Stylized git pull output
2025-09-13 15:29:32 +02:00
patientx
596049a855
Merge branch 'comfyanonymous:master' into master 2025-09-13 14:38:42 +03:00