Commit Graph

4862 Commits

Author SHA1 Message Date
doctorpangloss
ac0694a7bd Fix #46 enable node blacklisting using --blacklist-custom-nodes ComfyUI-Manager / config.blacklist_custom_nodes = ["ComfyUI-Manager"] 2025-09-23 13:50:05 -07:00
doctorpangloss
2a881a768e Fix #45 now test 3.10 and 3.12 images. NVIDIA doesn't support 3.11 at all! 2025-09-23 13:22:54 -07:00
doctorpangloss
f6d3962c77 Fix tests
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-09-23 12:42:12 -07:00
doctorpangloss
6a48fc1c40 Pass all tests. Add Qwen-Edit and other Qwen checkpoints for testing 2025-09-23 12:15:41 -07:00
doctorpangloss
6af812f9a8 Fix custom model paths config paths, tweak tests 2025-09-23 11:01:48 -07:00
doctorpangloss
4a3feee1a2 Tweaks 2025-09-23 10:28:36 -07:00
doctorpangloss
dd9a781654 Fix linting issues 2025-09-23 10:19:30 -07:00
comfyanonymous
b8730510db ComfyUI version 0.3.60 2025-09-23 11:50:33 -04:00
Alexander Piskun
e808790799 feat(api-nodes): add wan t2i, t2v, i2v nodes (#9996) 2025-09-23 11:36:47 -04:00
ComfyUI Wiki
145b0e4f79 update template to 0.1.86 (#9998)
* update template to 0.1.84

* update template to 0.1.85

* Update template to 0.1.86
2025-09-23 11:22:35 -04:00
comfyanonymous
707b2638ec Fix bug with WanAnimateToVideo. (#9990) 2025-09-22 17:34:33 -04:00
doctorpangloss
66cf9b41f2 Move nodes_chroma_radiance 2025-09-22 14:33:24 -07:00
doctorpangloss
fd6e5c2c8d move test_cache_control 2025-09-22 14:31:21 -07:00
doctorpangloss
8b96170e53 move cache middleware 2025-09-22 14:30:38 -07:00
doctorpangloss
a9a0f96408 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-09-22 14:29:50 -07:00
comfyanonymous
8a5ac527e6 Fix bug with WanAnimateToVideo node. (#9988) 2025-09-22 17:26:58 -04:00
Christian Byrne
e3206351b0 add offset param (#9977) 2025-09-22 17:12:32 -04:00
comfyanonymous
1fee8827cb Support for qwen edit plus model. Use the new TextEncodeQwenImageEditPlus. (#9986) 2025-09-22 16:49:48 -04:00
comfyanonymous
27bc181c49 Set some wan nodes as no longer experimental. (#9976) 2025-09-21 19:48:31 -04:00
comfyanonymous
d1d9eb94b1 Lower wan memory estimation value a bit. (#9964)
The previous PR reduced the peak memory requirement.
2025-09-20 22:09:35 -04:00
Kohaku-Blueleaf
7be2b49b6b Fix LoRA Trainer bugs with FP8 models. (#9854)
* Fix adapter weight init

* Fix fp8 model training

* Avoid inference tensor
2025-09-20 21:24:48 -04:00
Jedrzej Kosinski
9ed3c5cc09 [Reviving #5709] Add strength input to Differential Diffusion (#9957)
* Update nodes_differential_diffusion.py

* Update nodes_differential_diffusion.py

* Make strength optional to avoid validation errors when loading old workflows, adjust step

---------

Co-authored-by: ThereforeGames <eric@sparknight.io>
2025-09-20 21:10:39 -04:00
comfyanonymous
66241cef31 Add inputs for character replacement to the WanAnimateToVideo node. (#9960) 2025-09-20 02:24:10 -04:00
doctorpangloss
c48163e78c Fix misalignment between EXIF tags and whatever pillow was actually doing 2025-09-19 16:14:40 -07:00
comfyanonymous
e8df53b764 Update WanAnimateToVideo to more easily extend videos. (#9959) 2025-09-19 18:48:56 -04:00
doctorpangloss
b9368317af Various fixes
- raise on file load failures in the base nodes
- transformers models should load with trust_remote_code False whenever possible
- fix canonicalize_map call for Windows-Linux interoperability
2025-09-19 14:54:54 -07:00
Alexander Piskun
852704c81a fix(seedream4): add flag to ignore error on partial success (#9952) 2025-09-19 16:04:51 -04:00
Alexander Piskun
9fdf8c25ab api_nodes: reduce default timeout from 7 days to 2 hours (#9918) 2025-09-19 16:02:43 -04:00
comfyanonymous
dc95b6acc0 Basic WIP support for the wan animate model. (#9939) 2025-09-19 03:07:17 -04:00
Christian Byrne
711bcf33ee Bump frontend to 1.26.13 (#9933) 2025-09-19 03:03:30 -04:00
comfyanonymous
24b0fce099 Do padding of audio embed in model for humo for more flexibility. (#9935) 2025-09-18 19:54:16 -04:00
Jodh Singh
1ea8c54064 make kernel of same type as image to avoid mismatch issues (#9932) 2025-09-18 19:51:16 -04:00
DELUXA
8d6653fca6 Enable fp8 ops by default on gfx1200 (#9926) 2025-09-18 19:50:37 -04:00
doctorpangloss
ffbb2f7cd3 Fix bug in model_downloader 2025-09-18 14:47:28 -07:00
doctorpangloss
1cf77157a1 Enable inference tests in CI 2025-09-18 13:56:30 -07:00
doctorpangloss
e6762bb82a Use pylint dynamic member correctly 2025-09-18 13:42:05 -07:00
doctorpangloss
bc201cea4d Improve tests, fix issues with alternate filenames, improve group offloading support for transformers models 2025-09-18 13:25:08 -07:00
doctorpangloss
79b8723f61 Various fixes
- Fix 16 bit EXIF saving for PNGs
- Validate alternative filenames correctly
- Improve speed of test workflows by setting steps to 1
2025-09-17 16:04:05 -07:00
comfyanonymous
dd611a7700 Support the HuMo 17B model. (#9912) 2025-09-17 18:39:24 -04:00
doctorpangloss
3d23c298a2 Fix JPEG WebP and TIFF EXIF 2025-09-17 12:18:15 -07:00
Benjamin Berman
4839189809 16 bit EXIF is broken 2025-09-17 12:04:44 -07:00
Benjamin Berman
db546c4826 Sometimes progress messages of images with metadata are not valid. Need to investigate why. 2025-09-16 22:04:39 -07:00
comfyanonymous
9288c78fc5 Support the HuMo model. (#9903) 2025-09-17 00:12:48 -04:00
rattus128
e42682b24e Reduce Peak WAN inference VRAM usage (#9898)
* flux: Do the xq and xk ropes one at a time

This was doing independent interleaved tensor math on the q and k
tensors, holding more than the minimum intermediates in VRAM. On a
bad day, it would VRAM OOM on xk intermediates.

Do everything q and then everything k, so torch can garbage collect
all of q's intermediates before k allocates its intermediates.

This reduces peak VRAM usage for some WAN2.2 inferences (at least).

* wan: Optimize qkv intermediates on attention

As commented. The former logic computed independent pieces of QKV in
parallel, which held more inference intermediates in VRAM, spiking
VRAM usage. Fully roping Q and garbage collecting the intermediates
before touching K reduces the peak inference VRAM usage.
2025-09-16 19:21:14 -04:00
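The commit above reorders the rope math so that all of q's intermediates can be garbage collected before k's are allocated. A minimal NumPy sketch of that ordering idea (`apply_rope` here is a simplified pair-rotation stand-in, not ComfyUI's flux/wan code, and the function names are illustrative):

```python
import numpy as np

def apply_rope(x, freqs):
    # Simplified rotary embedding: treat the last dim as (d/2, 2) pairs
    # and rotate each pair by the corresponding angle in freqs.
    x = x.reshape(*x.shape[:-1], -1, 2)
    x1, x2 = x[..., 0], x[..., 1]
    cos, sin = np.cos(freqs), np.sin(freqs)
    out = np.stack([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)
    return out.reshape(*out.shape[:-2], -1)

def rope_sequential(xq, xk, freqs):
    # Finish ALL of q's rope math first: once apply_rope returns, its
    # temporaries go out of scope and can be freed...
    q_out = apply_rope(xq, freqs)
    # ...so k's temporaries are only allocated afterwards, instead of
    # interleaving q and k steps and holding both sets of intermediates
    # alive at once (the peak-VRAM problem the commit describes).
    k_out = apply_rope(xk, freqs)
    return q_out, k_out
```

The numerical result is identical either way; only the lifetime of the temporaries changes, which is what lowers the peak.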
comfyanonymous
a39ac59c3e Add encoder part of whisper large v3 as an audio encoder model. (#9894)
Not useful yet but some models use it.
2025-09-16 01:19:50 -04:00
blepping
1a85483da1 Fix depending on asserts to raise an exception in BatchedBrownianTree and Flash attn module (#9884)
Correctly handle the case where w0 is passed by kwargs in BatchedBrownianTree
2025-09-15 20:05:03 -04:00
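The fix above makes two related points: `assert` is unreliable for validation (it is stripped when Python runs with `-O`), and `w0` may arrive via kwargs rather than positionally. A hypothetical minimal sketch of that pattern (the function name is illustrative, not the BatchedBrownianTree API):

```python
def take_w0(*args, **kwargs):
    """Accept w0 positionally or as a keyword, and validate with a
    real exception instead of `assert`."""
    if args:
        return args[0]
    # Handle the case where w0 is passed by kwargs.
    if "w0" in kwargs:
        return kwargs["w0"]
    # Raise a real exception: unlike `assert`, this survives `python -O`.
    raise ValueError("w0 must be provided")
```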
doctorpangloss
e7f7a8f80c Tweak dockerfile 2025-09-15 15:47:22 -07:00
comfyanonymous
47a9cde5d3 Support the omnigen2 umo lora. (#9886) 2025-09-15 18:10:55 -04:00
comfyanonymous
4f1f26ac6c Add that hunyuan image is supported to readme. (#9857) 2025-09-14 04:05:38 -04:00
Jedrzej Kosinski
f228367c5e Make ModuleNotFoundError ImportError instead (#9850) 2025-09-13 21:34:21 -04:00