Chargeuk
ed945a1790
Dependency Aware Node Caching for low RAM/VRAM machines ( #7509 )
...
* add dependency aware cache that removes a cached node as soon as all of its descendants have executed. This allows users with lower RAM to run workflows they would otherwise not be able to run. The downside is that every workflow will fully run each time, even if no nodes have changed.
* remove test code
* tidy code
2025-04-11 06:55:51 -04:00
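The caching strategy described in the commit above can be illustrated with a small reference-counting sketch. This is a minimal, hypothetical outline (the class and method names below are invented, not the ones in ComfyUI's execution code): each node output stays cached only until every node that consumes it has run.

```python
class DependencyAwareCache:
    """Sketch: free a node's cached output once all of its consumers have run."""

    def __init__(self, graph):
        # graph maps node_id -> list of node_ids whose outputs that node consumes
        self.graph = graph
        self.outputs = {}
        # number of consumers that still need each node's output
        self.remaining = {nid: 0 for nid in graph}
        for inputs in graph.values():
            for dep in inputs:
                self.remaining[dep] += 1

    def store(self, node_id, value):
        self.outputs[node_id] = value

    def get(self, node_id):
        return self.outputs[node_id]

    def mark_executed(self, node_id):
        # node_id has run, so each of its inputs has one fewer pending consumer;
        # free any input whose consumers have all finished.
        for dep in self.graph[node_id]:
            self.remaining[dep] -= 1
            if self.remaining[dep] == 0:
                self.outputs.pop(dep, None)


# usage: in a linear graph a -> b -> c, a's output is freed right after b runs
cache = DependencyAwareCache({"a": [], "b": ["a"], "c": ["b"]})
```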
patientx
163780343e
updated comfyui-frontend version
2025-04-11 00:32:22 +03:00
patientx
27f850ff8a
Merge branch 'comfyanonymous:master' into master
2025-04-11 00:23:38 +03:00
Chenlei Hu
98bdca4cb2
Deprecate InputTypeOptions.defaultInput ( #7551 )
...
* Deprecate InputTypeOptions.defaultInput
* nit
* nit
2025-04-10 06:57:06 -04:00
patientx
ae8488fdc7
Merge branch 'comfyanonymous:master' into master
2025-04-09 19:53:21 +03:00
Jedrzej Kosinski
e346d8584e
Add prepare_sampling wrapper allowing custom nodes to more accurately report noise_shape ( #7500 )
2025-04-09 09:43:35 -04:00
patientx
2bf016391a
Merge branch 'comfyanonymous:master' into master
2025-04-07 13:01:39 +03:00
comfyanonymous
70d7242e57
Support the wan fun reward loras.
2025-04-07 05:01:47 -04:00
patientx
137ab318e1
Merge branch 'comfyanonymous:master' into master
2025-04-05 16:30:54 +03:00
comfyanonymous
3bfe4e5276
Support 512 siglip model.
2025-04-05 07:01:01 -04:00
patientx
c90f1b7948
Merge branch 'comfyanonymous:master' into master
2025-04-05 13:39:32 +03:00
Raphael Walker
89e4ea0175
Add activations_shape info in UNet models ( #7482 )
...
* Add activations_shape info in UNet models
* activations_shape should be a list
2025-04-04 21:27:54 -04:00
comfyanonymous
3a100b9a55
Disable partial offloading of audio VAE.
2025-04-04 21:24:56 -04:00
patientx
4541842b9a
Merge branch 'comfyanonymous:master' into master
2025-04-03 03:15:32 +03:00
BiologicalExplosion
2222cf67fd
MLU memory optimization ( #7470 )
...
Co-authored-by: huzhan <huzhan@cambricon.com>
2025-04-02 19:24:04 -04:00
patientx
1040220970
Merge branch 'comfyanonymous:master' into master
2025-04-01 22:56:01 +03:00
BVH
301e26b131
Add option to store TE in bf16 ( #7461 )
2025-04-01 13:48:53 -04:00
patientx
b7d9be6864
Merge branch 'comfyanonymous:master' into master
2025-03-30 14:17:07 +03:00
comfyanonymous
a3100c8452
Remove useless code.
2025-03-29 20:12:56 -04:00
patientx
f02045a45d
Merge branch 'comfyanonymous:master' into master
2025-03-28 16:58:48 +03:00
comfyanonymous
2d17d8910c
Don't error if wan concat image has extra channels.
2025-03-28 08:49:29 -04:00
patientx
01f2da55f9
Update zluda.py
2025-03-28 10:33:39 +03:00
patientx
9d401fe602
Merge branch 'comfyanonymous:master' into master
2025-03-27 22:52:53 +03:00
comfyanonymous
0a1f8869c9
Add WanFunInpaintToVideo node for the Wan fun inpaint models.
2025-03-27 11:13:27 -04:00
patientx
39cf3cdc32
Merge branch 'comfyanonymous:master' into master
2025-03-27 12:35:23 +03:00
comfyanonymous
3661c833bc
Support the WAN 2.1 fun control models.
...
Use the new WanFunControlToVideo node.
2025-03-26 19:54:54 -04:00
patientx
8115bdf68a
Merge branch 'comfyanonymous:master' into master
2025-03-25 22:35:14 +03:00
comfyanonymous
8edc1f44c1
Support more float8 types.
2025-03-25 05:23:49 -04:00
patientx
87e937ecd6
Merge branch 'comfyanonymous:master' into master
2025-03-23 18:32:46 +03:00
comfyanonymous
e471c726e5
Fallback to pytorch attention if sage attention fails.
2025-03-22 15:45:56 -04:00
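A minimal sketch of the fallback behavior described above, assuming the failure surfaces as an exception from the sage attention call (the function name is illustrative, not the exact one in attention.py, and sageattn's keyword arguments differ between sageattention versions):

```python
import logging
import torch

def attention_sage_with_fallback(q, k, v):
    try:
        from sageattention import sageattn  # optional dependency
        return sageattn(q, k, v)
    except Exception as e:
        # If the sage kernel is missing or raises, use PyTorch attention instead.
        logging.warning("Sage attention failed, falling back to pytorch attention: %s", e)
        return torch.nn.functional.scaled_dot_product_attention(q, k, v)
```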
patientx
64008960e9
Update zluda.py
2025-03-22 13:53:47 +03:00
patientx
a6db9cc07a
Merge branch 'comfyanonymous:master' into master
2025-03-22 13:52:43 +03:00
comfyanonymous
d9fa9d307f
Automatically set the right sampling type for lotus.
2025-03-21 14:19:37 -04:00
thot experiment
83e839a89b
Native LotusD Implementation ( #7125 )
...
* draft pass at a native comfy implementation of Lotus-D depth and normal estimation
* fix model_sampling kludges
* fix ruff
---------
Co-authored-by: comfyanonymous <121283862+comfyanonymous@users.noreply.github.com>
2025-03-21 14:04:15 -04:00
patientx
cf9ad7aae3
Update zluda.py
2025-03-21 12:21:23 +03:00
patientx
28a4de830c
Merge branch 'comfyanonymous:master' into master
2025-03-20 14:23:30 +03:00
comfyanonymous
3872b43d4b
A few fixes for the hunyuan3d models.
2025-03-20 04:52:31 -04:00
comfyanonymous
32ca0805b7
Fix orientation of hunyuan 3d model.
2025-03-19 19:55:24 -04:00
comfyanonymous
11f1b41bab
Initial Hunyuan3Dv2 implementation.
...
Supports the multiview, mini, turbo models and VAEs.
2025-03-19 16:52:58 -04:00
patientx
4115126a82
Update zluda.py
2025-03-19 12:15:38 +03:00
patientx
34b87d4542
Merge branch 'comfyanonymous:master' into master
2025-03-18 17:14:19 +03:00
comfyanonymous
3b19fc76e3
Allow disabling pe in flux code for some other models.
2025-03-18 05:09:25 -04:00
patientx
c853489349
Merge branch 'comfyanonymous:master' into master
2025-03-18 01:12:05 +03:00
comfyanonymous
50614f1b79
Fix regression with clip vision.
2025-03-17 13:56:11 -04:00
patientx
97dd7b8b52
Merge branch 'comfyanonymous:master' into master
2025-03-17 13:10:34 +03:00
comfyanonymous
6dc7b0bfe3
Add support for giant dinov2 image encoder.
2025-03-17 05:53:54 -04:00
patientx
30e3177e00
Merge branch 'comfyanonymous:master' into master
2025-03-16 21:26:31 +03:00
comfyanonymous
e8e990d6b8
Cleanup code.
2025-03-16 06:29:12 -04:00
patientx
dcc409faa4
Merge branch 'comfyanonymous:master' into master
2025-03-16 13:05:56 +03:00
Jedrzej Kosinski
2e24a15905
Call unpatch_hooks at the start of ModelPatcher.partially_unload ( #7253 )
...
* Call unpatch_hooks at the start of ModelPatcher.partially_unload
* Only call unpatch_hooks in partially_unload if lowvram is possible
2025-03-16 06:02:45 -04:00
chaObserv
fd5297131f
Guard the edge cases of noise term in er_sde ( #7265 )
2025-03-16 06:02:25 -04:00
patientx
bc664959a4
Merge branch 'comfyanonymous:master' into master
2025-03-15 17:11:51 +03:00
comfyanonymous
55a1b09ddc
Allow loading diffusion model files with the "Load Checkpoint" node.
2025-03-15 08:27:49 -04:00
comfyanonymous
3c3988df45
Show a better error message if the VAE is invalid.
2025-03-15 08:26:36 -04:00
comfyanonymous
a2448fc527
Remove useless code.
2025-03-14 18:10:37 -04:00
patientx
977dbafcf6
update the comfyui frontend package version
2025-03-15 00:07:04 +03:00
patientx
ccd9fe7f3c
Merge branch 'comfyanonymous:master' into master
2025-03-14 23:47:51 +03:00
comfyanonymous
6a0daa79b6
Make the SkipLayerGuidanceDIT node work on WAN.
2025-03-14 10:55:19 -04:00
FeepingCreature
9c98c6358b
Tolerate missing @torch.library.custom_op ( #7234 )
...
This can happen on Pytorch versions older than 2.4.
2025-03-14 09:51:26 -04:00
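One way to tolerate the missing decorator, sketched under the assumption that the guard substitutes a no-op when torch.library.custom_op is unavailable (PyTorch older than 2.4); the op name and example function here are illustrative:

```python
import torch

if hasattr(torch.library, "custom_op"):
    custom_op = torch.library.custom_op
else:
    # PyTorch < 2.4: fall back to a no-op decorator so the plain Python
    # function is used directly instead of being registered as a custom op.
    def custom_op(name, *args, **kwargs):
        def decorator(fn):
            return fn
        return decorator

@custom_op("mylib::example_op", mutates_args=())
def example_op(x: torch.Tensor) -> torch.Tensor:
    # Works both as a registered custom op (new PyTorch) and as a plain function.
    return x * 2
```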
patientx
bcea9b9a0c
Update attention.py to keep older torch versions running
2025-03-14 16:45:32 +03:00
patientx
eaf40b802d
Merge branch 'comfyanonymous:master' into master
2025-03-14 12:00:25 +03:00
FeepingCreature
7aceb9f91c
Add --use-flash-attention flag. ( #7223 )
...
* Add --use-flash-attention flag.
This is useful on AMD systems, as FA builds are still 10% faster than Pytorch cross-attention.
2025-03-14 03:22:41 -04:00
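A rough sketch of how such a flag could be wired up with argparse; ComfyUI's real cli_args module defines more backends and options, so treat this as illustrative only:

```python
import argparse

parser = argparse.ArgumentParser()
# Cross-attention backends are mutually exclusive: pick at most one.
attn = parser.add_mutually_exclusive_group()
attn.add_argument("--use-flash-attention", action="store_true",
                  help="Use FlashAttention for cross attention (useful on AMD systems).")
attn.add_argument("--use-pytorch-cross-attention", action="store_true",
                  help="Use PyTorch scaled_dot_product_attention.")

args = parser.parse_args(["--use-flash-attention"])
if args.use_flash_attention:
    print("attention backend: flash attention")
```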
comfyanonymous
35504e2f93
Fix.
2025-03-13 15:03:18 -04:00
comfyanonymous
299436cfed
Print mac version.
2025-03-13 10:05:40 -04:00
patientx
b57a624e7c
Merge branch 'comfyanonymous:master' into master
2025-03-13 03:13:15 +03:00
Chenlei Hu
9b6cd9b874
[NodeDef] Add documentation on multi_select input option ( #7212 )
2025-03-12 17:29:39 -04:00
chaObserv
3fc688aebd
Ensure the extra_args in dpmpp sde series ( #7204 )
2025-03-12 17:28:59 -04:00
patientx
4a632a54a4
Merge branch 'comfyanonymous:master' into master
2025-03-12 11:28:56 +03:00
chaObserv
01015bff16
Add er_sde sampler ( #7187 )
2025-03-12 02:42:37 -04:00
patientx
cf490d92b3
Merge branch 'comfyanonymous:master' into master
2025-03-11 01:28:12 +03:00
comfyanonymous
ca8efab79f
Support control loras on Wan.
2025-03-10 17:23:13 -04:00
patientx
c469113159
Merge branch 'comfyanonymous:master' into master
2025-03-09 14:09:50 +03:00
comfyanonymous
9aac21f894
Fix issues with new hunyuan img2vid model and bump version to v0.3.26
2025-03-09 05:07:22 -04:00
Jedrzej Kosinski
528d1b3563
When cached_hook_patches contain weights for hooks, only use hook_backup for unused keys ( #7067 )
2025-03-09 04:26:31 -04:00
comfyanonymous
7395b0c0d1
Support new hunyuan video i2v model.
...
Use the new "v2 (replace)" guidance type in HunyuanImageToVideo and set
image_interleave to 4 on the "Text Encode Hunyuan Video" node.
2025-03-08 20:34:47 -05:00
comfyanonymous
0952569493
Fix stable cascade VAE on some lowvram machines.
2025-03-08 20:24:04 -05:00
patientx
b8ad97eca2
Update zluda.py
2025-03-08 14:57:09 +03:00
patientx
0bf4f88dea
Update zluda.py
2025-03-08 14:37:15 +03:00
patientx
2ce9177547
Update zluda.py
2025-03-08 14:33:15 +03:00
patientx
0f7de5c588
Update zluda.py
2025-03-08 14:23:08 +03:00
patientx
d385e286a1
Update zluda.py
2025-03-08 14:22:23 +03:00
patientx
09156b577c
Merge branch 'comfyanonymous:master' into master
2025-03-08 14:21:36 +03:00
comfyanonymous
be4e760648
Add an image_interleave option to the Hunyuan image to video encode node.
...
See the tooltip for what it does.
2025-03-07 19:56:26 -05:00
patientx
8847252eec
Merge branch 'comfyanonymous:master' into master
2025-03-07 16:14:21 +03:00
comfyanonymous
11b1f27cb1
Set WAN default compute dtype to fp16.
2025-03-07 04:52:36 -05:00
comfyanonymous
70e15fd743
No need for scale_input when fp8 matrix mult is disabled.
2025-03-07 04:49:20 -05:00
comfyanonymous
e1474150de
Support fp8_scaled diffusion models that don't use fp8 matrix mult.
2025-03-07 04:39:21 -05:00
patientx
7d16d06c3b
Merge branch 'comfyanonymous:master' into master
2025-03-06 23:42:01 +03:00
JettHu
e62d72e8ca
Typo in node_typing.py ( #7092 )
2025-03-06 15:24:04 -05:00
patientx
03825eaa28
Merge branch 'comfyanonymous:master' into master
2025-03-06 22:23:13 +03:00
comfyanonymous
dfa36e6855
Fix some things breaking when embeddings fail to apply.
2025-03-06 13:31:55 -05:00
patientx
44c060b3de
Merge branch 'comfyanonymous:master' into master
2025-03-06 12:16:31 +03:00
comfyanonymous
29a70ca101
Support HunyuanVideo image to video model.
2025-03-06 03:07:15 -05:00
comfyanonymous
0bef826a98
Support llava clip vision model.
2025-03-06 00:24:43 -05:00
comfyanonymous
85ef295069
Make applying embeddings more efficient.
...
Adding new tokens no longer makes a whole copy of the embeddings weight,
which can be massive on certain models.
2025-03-05 17:34:38 -05:00
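An illustrative sketch of the idea (not the actual ComfyUI tokenizer/embedding code): gather only the rows that are needed from the base weight and splice in the custom embedding vectors per prompt, instead of cloning the whole matrix to append new token rows:

```python
import torch

def embed_tokens(base_weight, token_ids, custom_vectors):
    """Build prompt embeddings without cloning the full base weight.

    base_weight:    [vocab, dim] text-encoder embedding matrix (left untouched)
    token_ids:      sequence of ints; ids >= vocab index into custom_vectors
    custom_vectors: [n_custom, dim] vectors loaded from an embedding file
    """
    vocab = base_weight.shape[0]
    rows = []
    for tid in token_ids:
        if tid < vocab:
            rows.append(base_weight[tid])
        else:
            rows.append(custom_vectors[tid - vocab])
    # Only the rows actually used are materialized; base_weight is never copied.
    return torch.stack(rows)
```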
patientx
199e91029d
Merge branch 'comfyanonymous:master' into master
2025-03-05 23:54:19 +03:00
Chenlei Hu
5d84607bf3
Add type hint for FileLocator ( #6968 )
...
* Add type hint for FileLocator
* nit
2025-03-05 15:35:26 -05:00
Silver
c1909f350f
Better argument handling of front-end-root ( #7043 )
...
* Better argument handling of front-end-root
Improves handling of the front-end-root launch argument. In several instances users have set it, yet ComfyUI launches as normal and completely disregards the argument, which doesn't make sense. It is better to indicate to the user that something is incorrect.
* Removed unused import
There was no real reason to use "Optional" typing in the front-end-root argument.
2025-03-05 15:34:22 -05:00
Chenlei Hu
52b3469606
[NodeDef] Explicitly add control_after_generate to seed/noise_seed ( #7059 )
...
* [NodeDef] Explicitly add control_after_generate to seed/noise_seed
* Update comfy/comfy_types/node_typing.py
Co-authored-by: filtered <176114999+webfiltered@users.noreply.github.com>
---------
Co-authored-by: filtered <176114999+webfiltered@users.noreply.github.com>
2025-03-05 15:33:23 -05:00
patientx
c36a942c12
Merge branch 'comfyanonymous:master' into master
2025-03-05 19:11:25 +03:00