Commit Graph

2391 Commits

Author SHA1 Message Date
patientx
7ba2a8d3b0
Merge branch 'comfyanonymous:master' into master 2025-07-28 22:15:10 +03:00
comfyanonymous
c60dc4177c
Remove unecessary clones in the wan2.2 VAE. (#9083) 2025-07-28 14:48:19 -04:00
Christopher Anderson
b5ede18481 This will allow much better support for gfx1032 and other things not specifically named 2025-07-29 04:21:45 +10:00
patientx
769ab3bd25
Merge branch 'comfyanonymous:master' into master 2025-07-28 15:21:30 +03:00
comfyanonymous
a88788dce6
Wan 2.2 support. (#9080) 2025-07-28 08:00:23 -04:00
patientx
5a45e12b61
Merge branch 'comfyanonymous:master' into master 2025-07-26 14:09:19 +03:00
comfyanonymous
0621d73a9c
Remove useless code. (#9059) 2025-07-26 04:44:19 -04:00
comfyanonymous
e6e5d33b35
Remove useless code. (#9041)
This is only needed on old pytorch 2.0 and older.
2025-07-25 04:58:28 -04:00
patientx
c3bf1d95e2
Merge branch 'comfyanonymous:master' into master 2025-07-25 10:20:29 +03:00
Eugene Fairley
4293e4da21
Add WAN ATI support (#8874)
* Add WAN ATI support

* Fixes

* Fix length

* Remove extra functions

* Fix

* Fix

* Ruff fix

* Remove torch.no_grad

* Add batch trajectory logic

* Scale inputs before and after motion patch

* Batch image/trajectory

* Ruff fix

* Clean up
2025-07-24 20:59:19 -04:00
patientx
970b7fb84f
Merge branch 'comfyanonymous:master' into master 2025-07-24 22:30:55 +03:00
comfyanonymous
69cb57b342
Print xpu device name. (#9035) 2025-07-24 15:06:25 -04:00
honglyua
0ccc88b03f
Support Iluvatar CoreX (#8585)
* Support Iluvatar CoreX
Co-authored-by: mingjiang.li <mingjiang.li@iluvatar.com>
2025-07-24 13:57:36 -04:00
patientx
30539d0d13
Merge branch 'comfyanonymous:master' into master 2025-07-24 13:59:09 +03:00
Kohaku-Blueleaf
eb2f78b4e0
[Training Node] algo support, grad acc, optional grad ckpt (#9015)
* Add factorization utils for lokr

* Add lokr train impl

* Add loha train impl

* Add adapter map for algo selection

* Add optional grad ckpt and algo selection

* Update __init__.py

* correct key name for loha

* Use custom fwd/bwd func and better init for loha

* Support gradient accumulation

* Fix bugs of loha

* use more stable init

* Add OFT training

* linting
2025-07-23 20:57:27 -04:00
chaObserv
e729a5cc11
Separate denoised and noise estimation in Euler CFG++ (#9008)
This will change their behavior with the sampling CONST type.
It also combines euler_cfg_pp and euler_ancestral_cfg_pp into one main function.
2025-07-23 19:47:05 -04:00
comfyanonymous
d3504e1778
Enable pytorch attention by default for gfx1201 on torch 2.8 (#9029) 2025-07-23 19:21:29 -04:00
comfyanonymous
a86a58c308
Fix xpu function not implemented p2. (#9027) 2025-07-23 18:18:20 -04:00
comfyanonymous
39dda1d40d
Fix xpu function not implemented. (#9026) 2025-07-23 18:10:59 -04:00
patientx
58f3250106
Merge branch 'comfyanonymous:master' into master 2025-07-23 23:49:46 +03:00
comfyanonymous
5ad33787de
Add default device argument. (#9023) 2025-07-23 14:20:49 -04:00
patientx
bd33a5d382
Merge branch 'comfyanonymous:master' into master 2025-07-23 03:27:52 +03:00
Simon Lui
255f139863
Add xpu version for async offload and some other things. (#9004) 2025-07-22 15:20:09 -04:00
patientx
b049c1df82
Merge branch 'comfyanonymous:master' into master 2025-07-17 00:49:42 +03:00
comfyanonymous
491fafbd64
Silence clip tokenizer warning. (#8934) 2025-07-16 14:42:07 -04:00
Harel Cain
9bc2798f72
LTXV VAE decoder: switch default padding mode (#8930) 2025-07-16 13:54:38 -04:00
patientx
d22d65cc68
Merge branch 'comfyanonymous:master' into master 2025-07-16 13:58:56 +03:00
comfyanonymous
50afba747c
Add attempt to work around the safetensors mmap issue. (#8928) 2025-07-16 03:42:17 -04:00
patientx
79e3b67425
Merge branch 'comfyanonymous:master' into master 2025-07-15 12:24:08 +03:00
Yoland Yan
543c24108c
Fix wrong reference bug (#8910) 2025-07-14 20:45:55 -04:00
patientx
3845c2ff7a
Merge branch 'comfyanonymous:master' into master 2025-07-12 14:59:05 +03:00
comfyanonymous
b40143984c
Add model detection error hint for lora. (#8880) 2025-07-12 03:49:26 -04:00
patientx
5ede75293f
Merge branch 'comfyanonymous:master' into master 2025-07-11 17:30:21 +03:00
comfyanonymous
938d3e8216
Remove windows line endings. (#8866) 2025-07-11 02:37:51 -04:00
patientx
43514805ed
Merge branch 'comfyanonymous:master' into master 2025-07-10 21:46:22 +03:00
guill
2b653e8c18
Support for async node functions (#8830)
* Support for async execution functions

This commit adds support for node execution functions defined as async. When
a node's execution function is defined as async, we can continue
executing other nodes while it is processing.

Standard uses of `await` should "just work", but people will still have
to be careful if they spawn actual threads. Because torch doesn't really
have async/await versions of functions, this won't particularly help
with most locally-executing nodes, but it does work for e.g. web
requests to other machines.

In addition to the execute function, the `VALIDATE_INPUTS` and
`check_lazy_status` functions can also be defined as async, though we'll
only resolve one node at a time right now for those.

* Add the execution model tests to CI

* Add a missing file

It looks like this got caught by .gitignore? There's probably a better
place to put it, but I'm not sure what that is.

* Add the websocket library for automated tests

* Add additional tests for async error cases

Also fixes one bug that was found when an async function throws an error
after being scheduled on a task.

* Add a feature flags message to reduce bandwidth

We now only send 1 preview message of the latest type the client can
support.

We'll add a console warning when the client fails to send a feature
flags message at some point in the future.

* Add async tests to CI

* Don't actually add new tests in this PR

Will do it in a separate PR

* Resolve unit test in GPU-less runner

* Just remove the tests that GHA can't handle

* Change line endings to UNIX-style

* Avoid loading model_management.py so early

Because model_management.py has a top-level `logging.info`, we have to
be careful not to import that file before we call `setup_logging`. If we
do, we end up having the default logging handler registered in addition
to our custom one.
2025-07-10 14:46:19 -04:00
patientx
b621197729
Merge branch 'comfyanonymous:master' into master 2025-07-09 00:16:45 +03:00
chaObserv
aac10ad23a
Add SA-Solver sampler (#8834) 2025-07-08 16:17:06 -04:00
josephrocca
974254218a
Un-hardcode chroma patch_size (#8840) 2025-07-08 15:56:59 -04:00
patientx
ab04a1c165
Merge branch 'comfyanonymous:master' into master 2025-07-06 14:41:03 +03:00
comfyanonymous
75d327abd5
Remove some useless code. (#8812) 2025-07-06 07:07:39 -04:00
patientx
65db0a046f
Merge branch 'comfyanonymous:master' into master 2025-07-06 02:44:08 +03:00
comfyanonymous
ee615ac269
Add warning when loading file unsafely. (#8800) 2025-07-05 14:34:57 -04:00
patientx
94464d7867
Merge branch 'comfyanonymous:master' into master 2025-07-04 11:03:58 +03:00
chaObserv
f41f323c52
Add the denoising step to several samplers (#8780) 2025-07-03 19:20:53 -04:00
patientx
455fc30fd8
Merge branch 'comfyanonymous:master' into master 2025-07-03 08:08:09 +03:00
City
d9277301d2
Initial code for new SLG node (#8759) 2025-07-02 20:13:43 -04:00
patientx
ac99b100ef
Merge branch 'comfyanonymous:master' into master 2025-07-02 12:50:51 +03:00
comfyanonymous
111f583e00
Fallback to regular op when fp8 op throws exception. (#8761) 2025-07-02 00:57:13 -04:00
patientx
fa03718ba9
Merge branch 'comfyanonymous:master' into master 2025-07-01 14:19:30 +03:00
chaObserv
b22e97dcfa
Migrate ER-SDE from VE to VP algorithm and add its sampler node (#8744)
Apply alpha scaling in the algorithm for reverse-time SDE and add custom ER-SDE sampler node for other solver types (SDE, ODE).
2025-07-01 02:38:52 -04:00
patientx
b093eef244
Merge branch 'comfyanonymous:master' into master 2025-06-29 15:26:39 +03:00
comfyanonymous
170c7bb90c
Fix contiguous issue with pytorch nightly. (#8729) 2025-06-29 06:38:40 -04:00
patientx
82eaaa0d10
Merge branch 'comfyanonymous:master' into master 2025-06-29 12:45:29 +03:00
comfyanonymous
396454fa41
Reorder the schedulers so simple is the default one. (#8722) 2025-06-28 18:12:56 -04:00
patientx
95625f5bde
Merge branch 'comfyanonymous:master' into master 2025-06-28 22:39:52 +03:00
xufeng
ba9548f756
“--whitelist-custom-nodes” args for comfy core to go with “--disable-all-custom-nodes” for development purposes (#8592)
* feat: “--whitelist-custom-nodes” args for comfy core to go with “--disable-all-custom-nodes” for development purposes

* feat: Simplify custom nodes whitelist logic to use consistent code paths
2025-06-28 15:24:02 -04:00
patientx
14864643e6
Merge branch 'comfyanonymous:master' into master 2025-06-28 01:17:17 +03:00
comfyanonymous
c36be0ea09
Fix memory estimation bug with kontext. (#8709) 2025-06-27 17:21:12 -04:00
patientx
4ab5a4bb6b
Merge branch 'comfyanonymous:master' into master 2025-06-27 21:54:55 +03:00
comfyanonymous
9093301a49
Don't add tiny bit of random noise when VAE encoding. (#8705)
Shouldn't change outputs but might make things a tiny bit more
deterministic.
2025-06-27 14:14:56 -04:00
patientx
76486ca342
Merge branch 'comfyanonymous:master' into master 2025-06-26 18:53:44 +03:00
comfyanonymous
ef5266b1c1
Support Flux Kontext Dev model. (#8679) 2025-06-26 11:28:41 -04:00
patientx
c1ce148f41
Merge branch 'comfyanonymous:master' into master 2025-06-26 11:34:14 +03:00
comfyanonymous
a96e65df18
Disable omnigen2 fp16 on older pytorch versions. (#8672) 2025-06-26 03:39:09 -04:00
patientx
7db0737e86
Merge branch 'comfyanonymous:master' into master 2025-06-26 02:41:07 +03:00
comfyanonymous
ec70ed6aea
Omnigen2 model implementation. (#8669) 2025-06-25 19:35:57 -04:00
patientx
8ad7604b12
Add files via upload 2025-06-26 01:45:32 +03:00
patientx
65ff46f01e
Merge branch 'comfyanonymous:master' into master 2025-06-25 12:14:05 +03:00
comfyanonymous
7a13f74220
unet -> diffusion model (#8659) 2025-06-25 04:52:34 -04:00
patientx
71a47f608a
Merge branch 'comfyanonymous:master' into master 2025-06-24 23:49:15 +03:00
chaObserv
8042eb20c6
Singlestep DPM++ SDE for RF (#8627)
Refactor the algorithm, and apply alpha scaling.
2025-06-24 14:59:09 -04:00
patientx
2bf05370ad
Merge branch 'comfyanonymous:master' into master 2025-06-21 14:54:22 +03:00
comfyanonymous
1883e70b43
Fix exception when using a noise mask with cosmos predict2. (#8621)
* Fix exception when using a noise mask with cosmos predict2.

* Fix ruff.
2025-06-21 03:30:39 -04:00
patientx
0e967d11b1
Merge branch 'comfyanonymous:master' into master 2025-06-20 16:18:48 +03:00
comfyanonymous
f7fb193712
Small flux optimization. (#8611) 2025-06-20 05:37:32 -04:00
patientx
73dfb38f5b
Merge branch 'comfyanonymous:master' into master 2025-06-20 09:44:52 +03:00
comfyanonymous
7e9267fa77
Make flux controlnet work with sd3 text enc. (#8599) 2025-06-19 18:50:05 -04:00
patientx
c46cc59288
Merge branch 'comfyanonymous:master' into master 2025-06-19 19:02:07 +03:00
comfyanonymous
91d40086db
Fix pytorch warning. (#8593) 2025-06-19 11:04:52 -04:00
patientx
e34ff57933
Add files via upload 2025-06-17 00:09:40 +03:00
patientx
4339cbea46
Merge branch 'comfyanonymous:master' into master 2025-06-16 22:21:35 +03:00
chaObserv
8e81c507d2
Multistep DPM++ SDE samplers for RF (#8541)
Include alpha in sampling and minor refactoring
2025-06-16 14:47:10 -04:00
comfyanonymous
e1c6dc720e
Allow setting min_length with tokenizer_data. (#8547) 2025-06-16 13:43:52 -04:00
patientx
e8327ffc3d
Merge branch 'comfyanonymous:master' into master 2025-06-15 21:42:40 +03:00
comfyanonymous
7ea79ebb9d
Add correct eps to ltxv rmsnorm. (#8542) 2025-06-15 12:21:25 -04:00
patientx
031f2f5120
Merge branch 'comfyanonymous:master' into master 2025-06-15 16:23:38 +03:00
comfyanonymous
d6a2137fc3
Support Cosmos predict2 image to video models. (#8535)
Use the CosmosPredict2ImageToVideoLatent node.
2025-06-14 21:37:07 -04:00
patientx
132d593223
Merge branch 'comfyanonymous:master' into master 2025-06-15 03:01:50 +03:00
chaObserv
53e8d8193c
Generalize SEEDS samplers (#8529)
Restore VP algorithm for RF and refactor noise_coeffs and half-logSNR calculations
2025-06-14 16:58:16 -04:00
patientx
8909c12ea9
Update model_management.py 2025-06-14 13:27:49 +03:00
patientx
8efd441de9
Merge branch 'comfyanonymous:master' into master 2025-06-14 12:43:19 +03:00
comfyanonymous
29596bd53f
Small cosmos attention code refactor. (#8530) 2025-06-14 05:02:05 -04:00
Kohaku-Blueleaf
520eb77b72
LoRA Trainer: LoRA training node in weight adapter scheme (#8446) 2025-06-13 19:25:59 -04:00
patientx
e3720cd495
Merge branch 'comfyanonymous:master' into master 2025-06-13 15:04:05 +03:00
comfyanonymous
c69af655aa
Uncap cosmos predict2 res and fix mem estimation. (#8518) 2025-06-13 07:30:18 -04:00
comfyanonymous
251f54a2ad
Basic initial support for cosmos predict2 text to image 2B and 14B models. (#8517) 2025-06-13 07:05:23 -04:00
patientx
a5e9b6729c
Update zluda.py 2025-06-13 01:40:34 +03:00
patientx
7bd5bcd135
Update zluda-default.py 2025-06-13 01:40:11 +03:00
patientx
c06a15d8a5
Update zluda.py 2025-06-13 01:03:45 +03:00
patientx
896bda9003
Update zluda-default.py 2025-06-13 01:03:14 +03:00
patientx
7d5f4074b6
Add files via upload 2025-06-12 16:10:32 +03:00
patientx
79dda39260
Update zluda.py 2025-06-12 13:20:24 +03:00
patientx
08784dc90d
Update zluda.py 2025-06-12 13:19:59 +03:00
patientx
11af025690
Update zluda.py 2025-06-12 13:11:03 +03:00
patientx
828b7636d0
Update zluda.py 2025-06-12 13:10:40 +03:00
patientx
f53791d5d2
Merge branch 'comfyanonymous:master' into master 2025-06-12 00:32:55 +03:00
pythongosssss
50c605e957
Add support for sqlite database (#8444)
* Add support for sqlite database

* fix
2025-06-11 16:43:39 -04:00
comfyanonymous
8a4ff747bd
Fix mistake in last commit. (#8496)
* Move to right place.
2025-06-11 15:13:29 -04:00
comfyanonymous
af1eb58be8
Fix black images on some flux models in fp16. (#8495) 2025-06-11 15:09:11 -04:00
patientx
06ac233007
Merge branch 'comfyanonymous:master' into master 2025-06-10 20:34:42 +03:00
comfyanonymous
6e28a46454
Apple most likely is never fixing the fp16 attention bug. (#8485) 2025-06-10 13:06:24 -04:00
patientx
4bc3866c67
Merge branch 'comfyanonymous:master' into master 2025-06-09 21:10:00 +03:00
comfyanonymous
7f800d04fa
Enable AMD fp8 and pytorch attention on some GPUs. (#8474)
Information is from the pytorch source code.
2025-06-09 12:50:39 -04:00
patientx
b4d015f5f3
Merge branch 'comfyanonymous:master' into master 2025-06-08 21:21:41 +03:00
comfyanonymous
97755eed46
Enable fp8 ops by default on gfx1201 (#8464) 2025-06-08 14:15:34 -04:00
patientx
156aedd995
Merge branch 'comfyanonymous:master' into master 2025-06-07 19:30:45 +03:00
comfyanonymous
daf9d25ee2
Cleaner torch version comparisons. (#8453) 2025-06-07 10:01:15 -04:00
patientx
d28b4525b3
Merge branch 'comfyanonymous:master' into master 2025-06-06 17:10:34 +03:00
comfyanonymous
3b4b171e18
Alternate fix for #8435 (#8442) 2025-06-06 09:43:27 -04:00
patientx
67fc8e3325
Merge branch 'comfyanonymous:master' into master 2025-06-06 01:42:57 +03:00
comfyanonymous
4248b1618f
Let chroma TE work on regular flux. (#8429) 2025-06-05 10:07:17 -04:00
patientx
9aeff135b2
Update zluda.py 2025-06-02 02:55:19 +03:00
patientx
803f82189a
Merge branch 'comfyanonymous:master' into master 2025-06-01 17:44:48 +03:00
comfyanonymous
fb4754624d
Make the casting in lists the same as regular inputs. (#8373) 2025-06-01 05:39:54 -04:00
comfyanonymous
19e45e9b0e
Make it easier to pass lists of tensors to models. (#8358) 2025-05-31 20:00:20 -04:00
patientx
d74ffb792a
Merge branch 'comfyanonymous:master' into master 2025-05-31 01:55:42 +03:00
drhead
08b7cc7506
use fused multiply-add pointwise ops in chroma (#8279) 2025-05-30 18:09:54 -04:00
patientx
07b8d211e6
Merge branch 'comfyanonymous:master' into master 2025-05-30 23:48:15 +03:00
comfyanonymous
704fc78854
Put ROCm version in tuple to make it easier to enable stuff based on it. (#8348) 2025-05-30 15:41:02 -04:00
patientx
c74742444d
Merge branch 'comfyanonymous:master' into master 2025-05-29 18:29:06 +03:00
comfyanonymous
f2289a1f59
Delete useless file. (#8327) 2025-05-29 08:29:37 -04:00
patientx
46a997fb23
Merge branch 'comfyanonymous:master' into master 2025-05-29 10:56:01 +03:00
comfyanonymous
5e5e46d40c
Not really tested WAN Phantom Support. (#8321) 2025-05-28 23:46:15 -04:00
comfyanonymous
1c1687ab1c
Support HiDream SimpleTuner loras. (#8318) 2025-05-28 18:47:15 -04:00
patientx
5b5165371e
Merge branch 'comfyanonymous:master' into master 2025-05-28 01:06:44 +03:00
comfyanonymous
06c661004e
Memory estimation code can now take into account conds. (#8307) 2025-05-27 15:09:05 -04:00
patientx
8609a6dced
Merge branch 'comfyanonymous:master' into master 2025-05-27 01:03:35 +03:00
comfyanonymous
89a84e32d2
Disable initial GPU load when novram is used. (#8294) 2025-05-26 16:39:27 -04:00
patientx
bbcb33ea72
Merge branch 'comfyanonymous:master' into master 2025-05-26 16:26:39 +03:00
comfyanonymous
e5799c4899
Enable pytorch attention by default on AMD gfx1151 (#8282) 2025-05-26 04:29:25 -04:00
patientx
48bbdd0842
Merge branch 'comfyanonymous:master' into master 2025-05-25 15:38:51 +03:00
comfyanonymous
a0651359d7
Return proper error if diffusion model not detected properly. (#8272) 2025-05-25 05:28:11 -04:00
patientx
9790aaac7b
Merge branch 'comfyanonymous:master' into master 2025-05-24 14:00:54 +03:00
comfyanonymous
5a87757ef9
Better error if sageattention is installed but a dependency is missing. (#8264) 2025-05-24 06:43:12 -04:00
patientx
3b69a08c08
Merge branch 'comfyanonymous:master' into master 2025-05-24 04:06:28 +03:00
comfyanonymous
0b50d4c0db
Add argument to explicitly enable fp8 compute support. (#8257)
This can be used to test if your current GPU/pytorch version supports fp8 matrix mult in combination with --fast or the fp8_e4m3fn_fast dtype.
2025-05-23 17:43:50 -04:00
patientx
3e49c3e2ff
Merge branch 'comfyanonymous:master' into master 2025-05-24 00:01:56 +03:00
drhead
30b2eb8a93
create arange on-device (#8255) 2025-05-23 16:15:06 -04:00
patientx
c653935b37
Merge branch 'comfyanonymous:master' into master 2025-05-23 13:57:11 +03:00
LuXuxue
dc4958db54
add some architectures to utils.py 2025-05-23 13:54:03 +08:00
comfyanonymous
f85c08df06
Make VACE conditionings stackable. (#8240) 2025-05-22 19:22:26 -04:00
comfyanonymous
87f9130778
Revert "This doesn't seem to be needed on chroma. (#8209)" (#8210)
This reverts commit 7e84bf5373.
2025-05-20 05:39:55 -04:00
comfyanonymous
7e84bf5373
This doesn't seem to be needed on chroma. (#8209) 2025-05-20 05:29:23 -04:00
patientx
acf83b60f4
Merge branch 'comfyanonymous:master' into master 2025-05-18 11:37:17 +03:00
comfyanonymous
aee2908d03
Remove useless log. (#8166) 2025-05-17 06:27:34 -04:00
patientx
de4e3dd19a
Merge branch 'comfyanonymous:master' into master 2025-05-16 02:43:05 +03:00
comfyanonymous
1c2d45d2b5
Fix typo in last PR. (#8144)
More robust model detection for future proofing.
2025-05-15 19:02:19 -04:00
George0726
c820ef950d
Add Wan-FUN Camera Control models and Add WanCameraImageToVideo node (#8013)
* support wan camera models

* fix by ruff check

* change camera_condition type; make camera_condition optional

* support camera trajectory nodes

* fix camera direction

---------

Co-authored-by: Qirui Sun <sunqr0667@126.com>
2025-05-15 19:00:43 -04:00
patientx
1cf25c6980
Add files via upload 2025-05-15 13:55:15 +03:00
patientx
44cac886c4
Create quant_per_block.py 2025-05-15 13:54:47 +03:00
patientx
01aae8eddc
Create fwd_prefill.py 2025-05-15 13:52:35 +03:00
patientx
39279bda97
Merge branch 'comfyanonymous:master' into master 2025-05-14 14:15:49 +03:00
Christian Byrne
98ff01e148
Display progress and result URL directly on API nodes (#8102)
* [Luma] Print download URL of successful task result directly on nodes (#177)

[Veo] Print download URL of successful task result directly on nodes (#184)

[Recraft] Print download URL of successful task result directly on nodes (#183)

[Pixverse] Print download URL of successful task result directly on nodes (#182)

[Kling] Print download URL of successful task result directly on nodes (#181)

[MiniMax] Print progress text and download URL of successful task result directly on nodes (#179)

[Docs] Link to docs in `API_NODE` class property type annotation comment (#178)

[Ideogram] Print download URL of successful task result directly on nodes (#176)

Show output URL and progress text on Pika nodes (#168)

[BFL] Print download URL of successful task result directly on nodes (#175)

[OpenAI ] Print download URL of successful task result directly on nodes (#174)

* fix ruff errors

* fix 3.10 syntax error
2025-05-14 00:33:18 -04:00
patientx
c5ecbe5b30
Merge branch 'comfyanonymous:master' into master 2025-05-14 00:08:44 +03:00
comfyanonymous
4a9014e201
Hunyuan Custom initial untested implementation. (#8101) 2025-05-13 15:53:47 -04:00
patientx
3609a0cf35
Update zluda.py 2025-05-13 18:57:09 +03:00
patientx
8ced886a3d
Update zluda.py 2025-05-13 16:23:24 +03:00
patientx
6fc773788f
updated how to handle comfyui package updates 2025-05-13 16:22:44 +03:00
patientx
f0127d6326
Merge branch 'comfyanonymous:master' into master 2025-05-13 16:00:32 +03:00
patientx
d934498f60
Update rmsnorm.py 2025-05-13 16:00:26 +03:00
comfyanonymous
a814f2e8cc
Fix issue with old pytorch RMSNorm. (#8095) 2025-05-13 07:54:28 -04:00
comfyanonymous
481732a0ed
Support official ACE Step loras. (#8094) 2025-05-13 07:32:16 -04:00
patientx
d15ce39530
Merge branch 'comfyanonymous:master' into master 2025-05-13 01:06:19 +03:00
patientx
05d6c876ad
Update zluda.py 2025-05-13 01:06:08 +03:00
patientx
6aa9077f3c
Update zluda.py 2025-05-13 01:05:49 +03:00
comfyanonymous
640c47e7de
Fix torch warning about deprecated function. (#8075)
Drop support for torch versions below 2.2 on the audio VAEs.
2025-05-12 14:32:01 -04:00
patientx
cd7eb9bd36
"Boolean value of Tensor with more than one value is ambiguous" fix 2025-05-11 20:39:42 +03:00
patientx
8abcc4ec4f
Update zluda.py 2025-05-11 16:51:43 +03:00
patientx
3f58f50df3
Update zluda.py 2025-05-11 16:51:06 +03:00
patientx
6098615cda
Merge branch 'comfyanonymous:master' into master 2025-05-11 15:10:21 +03:00
comfyanonymous
577de83ca9
ACE VAE works in fp16. (#8055) 2025-05-11 04:58:00 -04:00
patientx
515bf966b2
Merge branch 'comfyanonymous:master' into master 2025-05-10 14:57:16 +03:00
comfyanonymous
d42613686f
Fix issue with fp8 ops on some models. (#8045)
_scaled_mm errors when an input is non contiguous.
2025-05-10 07:52:56 -04:00
patientx
2862921cca
Merge branch 'comfyanonymous:master' into master 2025-05-10 14:17:24 +03:00
Pam
1b3bf0a5da
Fix res_multistep_ancestral sampler (#8030) 2025-05-09 20:14:13 -04:00
patientx
56461e2e90
Merge branch 'comfyanonymous:master' into master 2025-05-09 22:40:55 +03:00
blepping
42da274717
Use normal ComfyUI attention in ACE-Steps model (#8023)
* Use normal ComfyUI attention in ACE-Steps model

* Let optimized_attention handle output reshape for ACE
2025-05-09 13:51:02 -04:00
patientx
4672537a99
Merge branch 'comfyanonymous:master' into master 2025-05-09 12:39:13 +03:00
comfyanonymous
8ab15c863c
Add --mmap-torch-files to enable use of mmap when loading ckpt/pt (#8021) 2025-05-09 04:52:47 -04:00
patientx
8a3424f354
Update zluda.py 2025-05-09 00:17:15 +03:00
patientx
a59deda664
Update zluda.py 2025-05-09 00:16:37 +03:00
patientx
9444d18408
Update zluda.py 2025-05-08 23:44:22 +03:00
patientx
184c8521cb
Update zluda.py 2025-05-08 23:44:05 +03:00
patientx
6c2370a577
Update zluda.py 2025-05-08 20:18:26 +03:00
patientx
0436293d99
Update zluda.py 2025-05-08 20:17:58 +03:00
patientx
81a16eefbc
Update zluda.py 2025-05-08 19:57:59 +03:00
patientx
f9671afff0
Update zluda.py 2025-05-08 19:57:28 +03:00
patientx
a963fba415
Update utils.py 2025-05-08 15:33:45 +03:00
patientx
e973632f11
Merge branch 'comfyanonymous:master' into master 2025-05-08 14:52:54 +03:00
comfyanonymous
a692c3cca4
Make ACE VAE tiling work. (#8004) 2025-05-08 07:25:45 -04:00
comfyanonymous
5d3cc85e13
Make japanese hiragana and katakana characters work with ACE. (#7997) 2025-05-08 03:32:36 -04:00
comfyanonymous
c7c025b8d1
Adjust memory estimation code for ACE VAE. (#7990) 2025-05-08 01:22:23 -04:00
comfyanonymous
fd08e39588
Make torchaudio not a hard requirement. (#7987)
Some platforms can't install it apparently so if it's not there it should
only break models that actually use it.
2025-05-07 21:37:12 -04:00
comfyanonymous
56b6ee6754
Detection code to make ltxv models without config work. (#7986) 2025-05-07 21:28:24 -04:00
patientx
a0656dad3a
Merge branch 'comfyanonymous:master' into master 2025-05-08 02:56:22 +03:00
comfyanonymous
cc33cd3422
Experimental lyrics strength for ACE. (#7984) 2025-05-07 19:22:07 -04:00
patientx
326e041c11
Merge branch 'comfyanonymous:master' into master 2025-05-07 17:44:36 +03:00
comfyanonymous
16417b40d9
Initial ACE-Step model implementation. (#7972) 2025-05-07 08:33:34 -04:00
patientx
5edeb23260
Merge branch 'comfyanonymous:master' into master 2025-05-06 19:08:07 +03:00
comfyanonymous
271c9c5b9e
Better mem estimation for the LTXV 13B model. (#7963) 2025-05-06 09:52:37 -04:00
comfyanonymous
a4e679765e
Change chroma to use Flux shift. (#7961) 2025-05-06 09:00:01 -04:00
patientx
bfc4e29f20
Merge branch 'comfyanonymous:master' into master 2025-05-06 14:02:47 +03:00
comfyanonymous
094e9ef126
Add a way to disable api nodes: --disable-api-nodes (#7960) 2025-05-06 04:53:53 -04:00
Jedrzej Kosinski
1271c4ef9d
More API Nodes (#7956)
* Add Ideogram generate node.

* Add staging api.

* Add API_NODE and common error for missing auth token (#5)

* Add Minimax Video Generation + Async Task queue polling example (#6)

* [Minimax] Show video preview and embed workflow in ouput (#7)

* Remove uv.lock

* Remove polling operations.

* Revert "Remove polling operations."

* Update stubs.

* Added Ideogram and Minimax back in.

* Added initial BFL Flux 1.1 [pro] Ultra node (#11)

* Add --comfy-api-base launch arg (#13)

* Add instructions for staging development. (#14)

* remove validation to make it easier to run against LAN copies of the API

* Manually add BFL polling status response schema (#15)

* Add function for uploading files. (#18)

* Add Luma nodes (#16)

* Refactor util functions (#20)

* Add VIDEO type (#21)

* Add rest of Luma node functionality (#19)

* Fix image_luma_ref not working (#28)

* [Bug] Remove duplicated option T2V-01 in MinimaxTextToVideoNode (#31)

* Add utils to map from pydantic model fields to comfy node inputs (#30)

* add veo2, bump av req (#32)

* Add Recraft nodes (#29)

* Add Kling Nodes (#12)

* Add Camera Concepts (luma_concepts) to Luma Video nodes (#33)

* Add Runway nodes (#17)

* Convert Minimax node to use VIDEO output type (#34)

* Standard `CATEGORY` system for api nodes (#35)

* Set `Content-Type` header when uploading files (#36)

* add better error propagation to veo2 (#37)

* Add Realistic Image and Logo Raster styles for Recraft v3 (#38)

* Fix runway image upload and progress polling (#39)

* Fix image upload for Luma: only include `Content-Type` header field if it's set explicitly (#40)

* Moved Luma nodes to nodes_luma.py (#47)

* Moved Recraft nodes to nodes_recraft.py (#48)

* Add Pixverse nodes (#46)

* Move and fix BFL nodes to node_bfl.py (#49)

* Move and edit Minimax node to nodes_minimax.py (#50)

* Add Minimax Image to Video node + Cleanup (#51)

* Add Recraft Text to Vector node, add Save SVG node to handle its output (#53)

* Added pixverse_template support to Pixverse Text to Video node (#54)

* Added Recraft Controls + Recraft Color RGB nodes (#57)

* split remaining nodes out of nodes_api, make utility lib, refactor ideogram (#61)

* Add types and doctstrings to utils file (#64)

* Fix: `PollingOperation` progress bar update progress by absolute value (#65)

* Use common download function in kling nodes module (#67)

* Fix: Luma video nodes in `api nodes/image` category (#68)

* Set request type explicitly (#66)

* Add `control_after_generate` to all seed inputs (#69)

* Fix bug: deleting `Content-Type` when property does not exist (#73)

* Add preview to Save SVG node (#74)

* change default poll interval (#76), rework veo2

* Add Pixverse and updated Kling types (#75)

* Added Pixverse Image to VIdeo node (#77)

* Add Pixverse Transition Video node (#79)

* Proper ray-1-6 support as fix has been applied in backend (#80)

* Added Recraft Style - Infinite Style Library node (#82)

* add ideogram v3 (#83)

* [Kling] Split Camera Control config to its own node (#81)

* Add Pika i2v and t2v nodes (#52)

* Temporary Fix for Runway (#87)

* Added Stability Stable Image Ultra node (#86)

* Remove Runway nodes (#88)

* Fix: Prompt text can't be validated in Kling nodes when using primitive nodes (#90)

* Fix: typo in node name "Stabiliy" => "Stability" (#91)

* Add String (Multiline) node (#93)

* Update Pika Duration and Resolution options (#94)

* Change base branch to master. Not main. (#95)

* Fix UploadRequest file_name param (#98)

* Removed Infinite Style Library until later (#99)

* fix ideogram style types (#100)

* fix multi image return (#101)

* add metadata saving to SVG (#102)

* Bump templates version to include API node template workflows (#104)

* Fix: `download_url_to_video_output` return type (#103)

* fix 4o generation bug (#106)

* Serve SVG files directly (#107)

* Add a bunch of nodes, 3 ready to use, the rest waiting for endpoint support (#108)

* Revert "Serve SVG files directly" (#111)

* Expose 4 remaining Recraft nodes (#112)

* [Kling] Add `Duration` and `Video ID` outputs (#105)

* Fix: datamodel-codegen sets string#binary type to non-existent `bytes_aliased` variable  (#114)

* Fix: Dall-e 2 not setting request content-type dynamically (#113)

* Default request timeout: one hour. (#116)

* Add Kling nodes: camera control, start-end frame, lip-sync, video extend (#115)

* Add 8 nodes - 4 BFL, 4 Stability (#117)

* Fix error for Recraft ImageToImage error for nonexistent random_seed param (#118)

* Add remaining Pika nodes (#119)

* Make controls input work for Recraft Image to Image node (#120)

* Use upstream PR: Support saving Comfy VIDEO type to buffer (#123)

* Use Upstream PR: "Fix: Error creating video when sliced audio tensor chunks are non-c-contiguous" (#127)

* Improve audio upload utils (#128)

* Fix: Nested `AnyUrl` in request model cannot be serialized (Kling, Runway) (#129)

* Show errors and API output URLs to the user (change log levels) (#131)

* Fix: Luma I2I fails when weight is <=0.01 (#132)

* Change category of `LumaConcepts` node from image to video (#133)

* Fix: `image.shape` accessed before `image` is null-checked (#134)

* Apply small fixes and most prompt validation (if needed to avoid API error) (#135)

* Node name/category modifications (#140)

* Add back Recraft Style - Infinite Style Library node (#141)

* Fixed Kling: Check attributes of pydantic types. (#144)

* Bump `comfyui-workflow-templates` version (#142)

* [Kling] Print response data when error validating response (#146)

* Fix: error validating Kling image response, trying to use `"key" in` on Pydantic class instance (#147)

* [Kling] Fix: Correct/verify supported subset of input combos in Kling nodes (#149)

* [Kling] Fix typo in node description (#150)

* [Kling] Fix: CFG min/max not being enforced (#151)

* Rebase launch-rebase (private) on prep-branch (public copy of master) (#153)

* Bump templates version (#154)

* Fix: Kling image gen nodes don't return entire batch when `n` > 1 (#152)

* Remove pixverse_template from PixVerse Transition Video node (#155)

* Invert image_weight value on Luma Image to Image node (#156)

* Invert and resize mask for Ideogram V3 node to match masking conventions (#158)

* [Kling] Fix: image generation nodes not returning Tuple (#159)

* [Bug] [Kling] Fix Kling camera control (#161)

* Kling Image Gen v2 + improve node descriptions for Flux/OpenAI (#160)

* [Kling] Don't return video_id from dual effect video (#162)

* Bump frontend to 1.18.8 (#163)

* Use 3.9 compat syntax (#164)

* Use Python 3.10

* add example env var

* Update templates to 0.1.11

* Bump frontend to 1.18.9

---------

Co-authored-by: Robin Huang <robin.j.huang@gmail.com>
Co-authored-by: Christian Byrne <cbyrne@comfy.org>
Co-authored-by: thot experiment <94414189+thot-experiment@users.noreply.github.com>
2025-05-06 04:23:00 -04:00
patientx
080b5d0df4
Update rmsnorm.py (Ensuring eps is not None) 2025-05-05 01:46:04 +03:00
patientx
634c398fd5
Merge branch 'comfyanonymous:master' into master 2025-05-04 16:14:44 +03:00
comfyanonymous
80a44b97f5
Change lumina to native RMSNorm. (#7935) 2025-05-04 06:39:23 -04:00
comfyanonymous
9187a09483
Change cosmos and hydit models to use the native RMSNorm. (#7934) 2025-05-04 06:26:20 -04:00
patientx
be4b829886
Merge branch 'comfyanonymous:master' into master 2025-05-04 03:46:17 +03:00
comfyanonymous
3041e5c354
Switch mochi and wan models to use pytorch RMSNorm. (#7925)
* Switch genmo model to native RMSNorm.

* Switch WAN to native RMSNorm.
2025-05-03 19:07:55 -04:00
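The commits above (and the lumina/cosmos/hydit ones nearby) move models from hand-rolled RMSNorm layers to PyTorch's native `torch.nn.RMSNorm`. For reference, RMSNorm scales by the root-mean-square of the input rather than subtracting a mean and dividing by a variance; a minimal pure-Python sketch of the computation (function name and defaults are mine, not ComfyUI's code):

```python
import math

def rms_norm(x, weight=None, eps=1e-6):
    # Normalize a 1-D list by its root-mean-square: x / sqrt(mean(x^2) + eps).
    # This mirrors what torch.nn.RMSNorm computes; illustrative only.
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    out = [v / rms for v in x]
    if weight is not None:
        # Optional learned per-element scale, as in the torch module.
        out = [o * w for o, w in zip(out, weight)]
    return out

print(rms_norm([3.0, 4.0]))
```

Unlike LayerNorm there is no mean subtraction and no bias, which is why the switch to the native module is a drop-in behavior-preserving change.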
patientx
f98aad15d5
Merge branch 'comfyanonymous:master' into master 2025-05-02 23:58:33 +03:00
patientx
1068783ff8
Update zluda.py 2025-05-02 20:21:10 +03:00
patientx
3b827e0a59
Update zluda.py 2025-05-02 20:20:41 +03:00
Kohaku-Blueleaf
2ab9618732
Fix the bugs in OFT/BOFT module (#7909)
* Correct calculate_weight and load for OFT

* Correct calculate_weight and loading for BOFT
2025-05-02 13:12:37 -04:00
patientx
bc1fa6e013
Create zluda.py (custom zluda for miopen-triton) 2025-05-02 17:44:26 +03:00
patientx
073ff4a11d
Create placeholder 2025-05-01 23:05:14 +03:00
patientx
aad001bfbe
Add files via upload 2025-05-01 23:04:46 +03:00
patientx
caa3597bb7
Create placeholder 2025-05-01 23:03:51 +03:00
patientx
085925ddf7
Update zluda.py 2025-05-01 20:49:28 +03:00
patientx
da173f67b3
Merge branch 'comfyanonymous:master' into master 2025-05-01 17:23:12 +03:00
comfyanonymous
aa9d759df3
Switch ltxv to use the pytorch RMSNorm. (#7897) 2025-05-01 06:33:42 -04:00
comfyanonymous
08ff5fa08a Cleanup chroma PR. 2025-04-30 20:57:30 -04:00
Silver
4ca3d84277
Support for Chroma - Flux1 Schnell distilled with CFG (#7355)
* Upload files for Chroma Implementation

* Remove trailing whitespace

* trim more trailing whitespace..oops

* remove unused imports

* Add supported_inference_dtypes

* Set min_length to 0 and remove attention_mask=True

* Set min_length to 1

* get_mdulations added from blepping and minor changes

* Add lora conversion if statement in lora.py

* Update supported_models.py

* update model_base.py

* add upstream commits

* set modelType.FLOW, will cause beta scheduler to work properly

* Adjust memory usage factor and remove unnecessary code

* fix mistake

* reduce code duplication

* remove unused imports

* refactor for upstream sync

* sync chroma-support with upstream via syncbranch patch

* Update sd.py

* Add Chroma as option for the OptimalStepsScheduler node
2025-04-30 20:57:00 -04:00
patientx
f49d26848b
Merge branch 'comfyanonymous:master' into master 2025-04-30 13:46:06 +03:00
comfyanonymous
dbc726f80c
Better vace memory estimation. (#7875) 2025-04-29 20:42:00 -04:00
comfyanonymous
0a66d4b0af
Per device stream counters for async offload. (#7873) 2025-04-29 20:28:52 -04:00
patientx
64709ce55c
Merge branch 'comfyanonymous:master' into master 2025-04-29 14:18:45 +03:00
guill
68f0d35296
Add support for VIDEO as a built-in type (#7844)
* Add basic support for videos as types

This PR adds support for VIDEO as first-class types. In order to avoid
unnecessary costs, VIDEO outputs must implement the `VideoInput` ABC,
but their implementation details can vary. Included are two
implementations of this type which can be returned by other nodes:

* `VideoFromFile` - Created with either a path on disk (as a string) or
  a `io.BytesIO` containing the contents of a file in a supported format
  (like .mp4). This implementation won't actually load the video unless
  necessary. It will also avoid re-encoding when saving if possible.
* `VideoFromComponents` - Created from an image tensor and an optional
  audio tensor.

Currently, only h264 encoded videos in .mp4 containers are supported for
saving, but the plan is to add additional encodings/containers in the
near future (particularly .webm).

* Add optimization to avoid parsing entire video

* Improve type declarations to reduce warnings

* Make sure bytesIO objects can be read many times

* Fix a potential issue when saving long videos

* Fix incorrect type annotation

* Add a `LoadVideo` node to make testing easier

* Refactor new types out of the base comfy folder

I've created a new `comfy_api` top-level module. The intention is that
anything within this folder would be covered by semver-style versioning
that would allow custom nodes to rely on them not introducing breaking
changes.

* Fix linting issue
2025-04-29 05:58:00 -04:00
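The PR above introduces VIDEO as a first-class type behind a `VideoInput` ABC, with a lazy, re-readable `VideoFromFile` implementation. A minimal sketch of that pattern, reusing the names from the PR description (the method names and details here are illustrative assumptions, not the actual `comfy_api` interface):

```python
import io
from abc import ABC, abstractmethod

class VideoInput(ABC):
    # Common interface for VIDEO outputs; concrete classes decide how/when to load.
    @abstractmethod
    def get_bytes(self) -> bytes: ...

class VideoFromFile(VideoInput):
    # Wraps a path on disk (str) or an io.BytesIO with file contents.
    # Nothing is read until get_bytes() is called, matching the PR's
    # "won't actually load the video unless necessary" behavior.
    def __init__(self, source):
        self._source = source

    def get_bytes(self) -> bytes:
        if isinstance(self._source, io.BytesIO):
            self._source.seek(0)  # allow the buffer to be read many times
            return self._source.read()
        with open(self._source, "rb") as f:
            return f.read()
```

The `seek(0)` before reading echoes the PR's "Make sure bytesIO objects can be read many times" fix.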
patientx
0aeb958ea5
Merge branch 'comfyanonymous:master' into master 2025-04-29 01:49:37 +03:00
comfyanonymous
83d04717b6
Support HiDream E1 model. (#7857) 2025-04-28 15:01:15 -04:00
chaObserv
c15909bb62
CFG++ for gradient estimation sampler (#7809) 2025-04-28 13:51:35 -04:00
comfyanonymous
5a50c3c7e5
Fix stream priority to support older pytorch. (#7856) 2025-04-28 13:07:21 -04:00
Pam
30159a7fe6
Save v pred zsnr metadata (#7840) 2025-04-28 13:03:21 -04:00
patientx
6244dfa1e1
Merge branch 'comfyanonymous:master' into master 2025-04-28 01:13:53 +03:00
comfyanonymous
c8cd7ad795
Use stream for casting if enabled. (#7833) 2025-04-27 05:38:11 -04:00
patientx
ec34f4da57
Merge branch 'comfyanonymous:master' into master 2025-04-27 05:01:55 +03:00
comfyanonymous
ac10a0d69e
Make loras work with --async-offload (#7824) 2025-04-26 19:56:22 -04:00
patientx
9cc8e2e1d0
Merge branch 'comfyanonymous:master' into master 2025-04-26 23:32:14 +03:00
comfyanonymous
0dcc75ca54
Add experimental --async-offload lowvram weight offloading. (#7820)
This should speed up the lowvram mode a bit. It currently is only enabled when --async-offload is used but it will be enabled by default in the future if there are no problems.
2025-04-26 16:11:21 -04:00
patientx
d6238ed3f0
Merge branch 'comfyanonymous:master' into master 2025-04-26 15:31:39 +03:00
comfyanonymous
23e39f2ba7
Add a T5TokenizerOptions node to set options for the T5 tokenizer. (#7803) 2025-04-25 19:36:00 -04:00
patientx
57a5e6e7ae
Merge branch 'comfyanonymous:master' into master 2025-04-26 01:23:01 +03:00
AustinMroz
78992c4b25
[NodeDef] Add documentation on widgetType (#7768)
* [NodeDef] Add documentation on widgetType

* Document required version for widgetType
2025-04-25 13:35:07 -04:00
patientx
224f72f90f
Merge branch 'comfyanonymous:master' into master 2025-04-25 14:11:50 +03:00
comfyanonymous
f935d42d8e Support SimpleTuner lycoris lora format for HiDream. 2025-04-25 03:11:14 -04:00
patientx
1d9338b4b9
Merge branch 'comfyanonymous:master' into master 2025-04-24 14:50:59 +03:00
thot experiment
e2eed9eb9b
throw away alpha channel in clip vision preprocessor (#7769)
saves users having to explicitly discard the channel
2025-04-23 21:28:36 -04:00
patientx
d8a75b86e9
Update zluda.py 2025-04-24 03:11:46 +03:00
patientx
91d7a9d234
Merge branch 'comfyanonymous:master' into master 2025-04-23 12:41:27 +03:00
comfyanonymous
552615235d
Fix for dino lowvram. (#7748) 2025-04-23 04:12:52 -04:00
Robin Huang
0738e4ea5d
[API nodes] Add backbone for supporting api nodes in ComfyUI (#7745)
* Add Ideogram generate node.

* Add staging api.

* COMFY_API_NODE_NAME node property

* switch to boolean flag and use original node name for id

* add optional to type

* Add API_NODE and common error for missing auth token (#5)

* Add Minimax Video Generation + Async Task queue polling example (#6)

* [Minimax] Show video preview and embed workflow in output (#7)

* [API Nodes] Send empty request body instead of empty dictionary. (#8)

* Fixed: removed function from rebase.

* Add pydantic.

* Remove uv.lock

* Remove polling operations.

* Update stubs workflow.

* Remove polling comments.

* Update stubs.

* Use pydantic v2.

* Use pydantic v2.

* Add basic OpenAITextToImage node

* Add.

* convert image to tensor.

* Improve types.

* Ruff.

* Push tests.

* Handle multi-form data.

- Don't set content-type for multi-part/form
- Use data field instead of JSON

* Change to api.comfy.org

* Handle error code 409.

* Remove nodes.

---------

Co-authored-by: bymyself <cbyrne@comfy.org>
Co-authored-by: Yoland Y <4950057+yoland68@users.noreply.github.com>
2025-04-23 02:18:08 -04:00
patientx
a397c3aeb3
Merge branch 'comfyanonymous:master' into master 2025-04-22 13:28:29 +03:00
comfyanonymous
2d6805ce57
Add option for using fp8_e8m0fnu for model weights. (#7733)
Seems to break every model I have tried but worth testing?
2025-04-22 06:17:38 -04:00
patientx
c09fa908f5
Merge branch 'comfyanonymous:master' into master 2025-04-22 12:13:56 +03:00
Kohaku-Blueleaf
a8f63c0d5b
Support dora_scale on both axis (#7727) 2025-04-22 05:01:27 -04:00
Kohaku-Blueleaf
966c43ce26
Add OFT/BOFT algorithm in weight adapter (#7725) 2025-04-22 04:59:47 -04:00
comfyanonymous
3ab231f01f
Fix issue with WAN VACE implementation. (#7724) 2025-04-21 23:36:12 -04:00
Kohaku-Blueleaf
1f3fba2af5
Unified Weight Adapter system for better maintainability and future feature of Lora system (#7540) 2025-04-21 20:15:32 -04:00
comfyanonymous
5d0d4ee98a
Add strength control for vace. (#7717) 2025-04-21 19:36:20 -04:00
patientx
32ec658779
Merge branch 'comfyanonymous:master' into master 2025-04-21 23:16:54 +03:00
filtered
5d51794607
Add node type hint for socketless option (#7714)
* Add node type hint for socketless option

* nit - Doc
2025-04-21 16:13:00 -04:00
comfyanonymous
ce22f687cc
Support for WAN VACE preview model. (#7711)
* Support for WAN VACE preview model.

* Remove print.
2025-04-21 14:40:29 -04:00
patientx
d926896b55
Merge branch 'comfyanonymous:master' into master 2025-04-21 00:00:31 +03:00
comfyanonymous
2c735c13b4
Slightly better fix for #7687 2025-04-20 11:33:27 -04:00
patientx
0143b19681
Merge branch 'comfyanonymous:master' into master 2025-04-20 15:05:36 +03:00
comfyanonymous
fd27494441 Use empty t5 of size 128 for hidream, seems to give closer results. 2025-04-19 19:49:40 -04:00
power88
f43e1d7f41
Hidream: Allow loading hidream text encoders in CLIPLoader and DualCLIPLoader (#7676)
* Hidream: Allow partial loading text encoders

* reformat code for ruff check.
2025-04-19 19:47:30 -04:00
patientx
cd77ded0c7
Merge branch 'comfyanonymous:master' into master 2025-04-19 23:20:10 +03:00
comfyanonymous
636d4bfb89 Fix hard crash when the spiece tokenizer path is bad. 2025-04-19 15:55:43 -04:00
patientx
682a70bcf9
Update zluda.py 2025-04-19 00:53:59 +03:00
patientx
71ac9830ab
updated comfyui-frontend version 2025-04-17 22:15:11 +03:00
patientx
95773a0045
updated "comfyui-frontend version" - added "comfyui-workflow-templates" 2025-04-17 22:13:36 +03:00
patientx
d9211bbf1e
Merge branch 'comfyanonymous:master' into master 2025-04-17 20:51:12 +03:00
comfyanonymous
dbcfd092a2 Set default context_img_len to 257 2025-04-17 12:42:34 -04:00
comfyanonymous
c14429940f Support loading WAN FLF model. 2025-04-17 12:04:48 -04:00
patientx
6bbe3b19d4
Merge branch 'comfyanonymous:master' into master 2025-04-17 14:51:08 +03:00
comfyanonymous
0d720e4367 Don't hardcode length of context_img in wan code. 2025-04-17 06:25:39 -04:00
patientx
7950c510e7
Merge branch 'comfyanonymous:master' into master 2025-04-17 10:10:03 +03:00
comfyanonymous
1fc00ba4b6 Make hidream work with any latent resolution. 2025-04-16 18:34:14 -04:00
patientx
d05e9b8298
Merge branch 'comfyanonymous:master' into master 2025-04-17 01:22:11 +03:00
comfyanonymous
9899d187b1 Limit T5 to 128 tokens for HiDream: #7620 2025-04-16 18:07:55 -04:00
comfyanonymous
f00f340a56 Reuse code from flux model. 2025-04-16 17:43:55 -04:00
patientx
503292b3c0
Merge branch 'comfyanonymous:master' into master 2025-04-16 23:26:10 +03:00
Chenlei Hu
cce1d9145e
[Type] Mark input options NotRequired (#7614) 2025-04-16 15:41:00 -04:00
patientx
765224ad78
Merge branch 'comfyanonymous:master' into master 2025-04-16 12:10:01 +03:00
comfyanonymous
b4dc03ad76 Fix issue on old torch. 2025-04-16 04:53:56 -04:00
patientx
eee802e685
Update zluda.py 2025-04-16 00:53:18 +03:00
patientx
ad2fa1a675
Merge branch 'comfyanonymous:master' into master 2025-04-16 00:50:26 +03:00
comfyanonymous
9ad792f927 Basic support for hidream i1 model. 2025-04-15 17:35:05 -04:00
patientx
ab0860b96a
Merge branch 'comfyanonymous:master' into master 2025-04-15 21:37:22 +03:00
comfyanonymous
6fc5dbd52a Cleanup. 2025-04-15 12:13:28 -04:00
patientx
af58afcae8
Merge branch 'comfyanonymous:master' into master 2025-04-15 18:23:31 +03:00
comfyanonymous
3e8155f7a3 More flexible long clip support.
Add clip g long clip support.

Text encoder refactor.

Support llama models with different vocab sizes.
2025-04-15 10:32:21 -04:00
patientx
09491fecbd
Merge branch 'comfyanonymous:master' into master 2025-04-15 14:41:15 +03:00
comfyanonymous
8a438115fb add RMSNorm to comfy.ops 2025-04-14 18:00:33 -04:00
patientx
205bdad97e
fix frontend package version 2025-04-13 17:12:14 +03:00
patientx
b6d5765f0f
Added onnxruntime patching 2025-04-13 17:11:13 +03:00
patientx
f1513262d3
Merge branch 'comfyanonymous:master' into master 2025-04-13 03:21:00 +03:00
chaObserv
e51d9ba5fc
Add SEEDS (stage 2 & 3 DP) sampler (#7580)
* Add seeds stage 2 & 3 (DP) sampler

* Change the name to SEEDS in comment
2025-04-12 18:36:08 -04:00
catboxanon
1714a4c158
Add CublasOps support (#7574)
* CublasOps support

* Guard CublasOps behind --fast arg
2025-04-12 18:29:15 -04:00
patientx
81b9fcfe4d
Merge branch 'comfyanonymous:master' into master 2025-04-11 19:55:56 +03:00
Chargeuk
ed945a1790
Dependency Aware Node Caching for low RAM/VRAM machines (#7509)
* add dependency-aware cache that removes a cached node as soon as all of its descendants have executed. This allows users with lower RAM to run workflows they would otherwise not be able to run. The downside is that every workflow will fully run each time even if no nodes have changed.

* remove test code

* tidy code
2025-04-11 06:55:51 -04:00
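The eviction rule this PR describes — drop a node's cached output as soon as every dependent has consumed it — can be sketched generically with a reference count per node (an illustrative sketch, not ComfyUI's executor; `run_graph` and its signature are my own):

```python
from collections import defaultdict
from graphlib import TopologicalSorter

def run_graph(graph, run_node):
    # graph: node -> list of input nodes it depends on.
    # run_node(node, inputs): computes a node given its inputs' cached values.
    remaining_uses = defaultdict(int)
    for inputs in graph.values():
        for dep in inputs:
            remaining_uses[dep] += 1  # count how many dependents will read it

    cache = {}
    for node in TopologicalSorter(graph).static_order():
        inputs = [cache[dep] for dep in graph[node]]
        cache[node] = run_node(node, inputs)
        # Evict any input whose last consumer just ran, keeping peak memory low.
        for dep in graph[node]:
            remaining_uses[dep] -= 1
            if remaining_uses[dep] == 0:
                del cache[dep]
    return cache  # only terminal outputs remain cached
```

The trade-off is the one the commit states: because intermediate results are freed eagerly, nothing survives between runs, so the whole graph re-executes every time.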
patientx
163780343e
updated comfyui-frontend version 2025-04-11 00:32:22 +03:00
patientx
27f850ff8a
Merge branch 'comfyanonymous:master' into master 2025-04-11 00:23:38 +03:00
Chenlei Hu
98bdca4cb2
Deprecate InputTypeOptions.defaultInput (#7551)
* Deprecate InputTypeOptions.defaultInput

* nit

* nit
2025-04-10 06:57:06 -04:00
patientx
ae8488fdc7
Merge branch 'comfyanonymous:master' into master 2025-04-09 19:53:21 +03:00
Jedrzej Kosinski
e346d8584e
Add prepare_sampling wrapper allowing custom nodes to more accurately report noise_shape (#7500) 2025-04-09 09:43:35 -04:00
patientx
2bf016391a
Merge branch 'comfyanonymous:master' into master 2025-04-07 13:01:39 +03:00
comfyanonymous
70d7242e57 Support the wan fun reward loras. 2025-04-07 05:01:47 -04:00
patientx
137ab318e1
Merge branch 'comfyanonymous:master' into master 2025-04-05 16:30:54 +03:00
comfyanonymous
3bfe4e5276 Support 512 siglip model. 2025-04-05 07:01:01 -04:00
patientx
c90f1b7948
Merge branch 'comfyanonymous:master' into master 2025-04-05 13:39:32 +03:00
Raphael Walker
89e4ea0175
Add activations_shape info in UNet models (#7482)
* Add activations_shape info in UNet models

* activations_shape should be a list
2025-04-04 21:27:54 -04:00
comfyanonymous
3a100b9a55 Disable partial offloading of audio VAE. 2025-04-04 21:24:56 -04:00
patientx
4541842b9a
Merge branch 'comfyanonymous:master' into master 2025-04-03 03:15:32 +03:00
BiologicalExplosion
2222cf67fd
MLU memory optimization (#7470)
Co-authored-by: huzhan <huzhan@cambricon.com>
2025-04-02 19:24:04 -04:00
patientx
1040220970
Merge branch 'comfyanonymous:master' into master 2025-04-01 22:56:01 +03:00
BVH
301e26b131
Add option to store TE in bf16 (#7461) 2025-04-01 13:48:53 -04:00
patientx
b7d9be6864
Merge branch 'comfyanonymous:master' into master 2025-03-30 14:17:07 +03:00
comfyanonymous
a3100c8452 Remove useless code. 2025-03-29 20:12:56 -04:00
patientx
f02045a45d
Merge branch 'comfyanonymous:master' into master 2025-03-28 16:58:48 +03:00
comfyanonymous
2d17d8910c Don't error if wan concat image has extra channels. 2025-03-28 08:49:29 -04:00
patientx
01f2da55f9
Update zluda.py 2025-03-28 10:33:39 +03:00
patientx
9d401fe602
Merge branch 'comfyanonymous:master' into master 2025-03-27 22:52:53 +03:00
comfyanonymous
0a1f8869c9 Add WanFunInpaintToVideo node for the Wan fun inpaint models. 2025-03-27 11:13:27 -04:00
patientx
39cf3cdc32
Merge branch 'comfyanonymous:master' into master 2025-03-27 12:35:23 +03:00
comfyanonymous
3661c833bc Support the WAN 2.1 fun control models.
Use the new WanFunControlToVideo node.
2025-03-26 19:54:54 -04:00
patientx
8115bdf68a
Merge branch 'comfyanonymous:master' into master 2025-03-25 22:35:14 +03:00
comfyanonymous
8edc1f44c1 Support more float8 types. 2025-03-25 05:23:49 -04:00
patientx
87e937ecd6
Merge branch 'comfyanonymous:master' into master 2025-03-23 18:32:46 +03:00
comfyanonymous
e471c726e5 Fallback to pytorch attention if sage attention fails. 2025-03-22 15:45:56 -04:00
patientx
64008960e9
Update zluda.py 2025-03-22 13:53:47 +03:00
patientx
a6db9cc07a
Merge branch 'comfyanonymous:master' into master 2025-03-22 13:52:43 +03:00
comfyanonymous
d9fa9d307f Automatically set the right sampling type for lotus. 2025-03-21 14:19:37 -04:00
thot experiment
83e839a89b
Native LotusD Implementation (#7125)
* draft pass at a native comfy implementation of Lotus-D depth and normal est

* fix model_sampling kludges

* fix ruff

---------

Co-authored-by: comfyanonymous <121283862+comfyanonymous@users.noreply.github.com>
2025-03-21 14:04:15 -04:00
patientx
cf9ad7aae3
Update zluda.py 2025-03-21 12:21:23 +03:00
patientx
28a4de830c
Merge branch 'comfyanonymous:master' into master 2025-03-20 14:23:30 +03:00
comfyanonymous
3872b43d4b A few fixes for the hunyuan3d models. 2025-03-20 04:52:31 -04:00
comfyanonymous
32ca0805b7 Fix orientation of hunyuan 3d model. 2025-03-19 19:55:24 -04:00