Commit Graph

1787 Commits

Author SHA1 Message Date
comfyanonymous
29a70ca101 Support HunyuanVideo image to video model. 2025-03-06 03:07:15 -05:00
comfyanonymous
0bef826a98 Support llava clip vision model. 2025-03-06 00:24:43 -05:00
comfyanonymous
85ef295069 Make applying embeddings more efficient.
Adding new tokens no longer makes a whole copy of the embeddings weight
which can be massive on certain models.
2025-03-05 17:34:38 -05:00
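A minimal sketch of the idea, assuming a plain PyTorch embedding table (illustrative, not ComfyUI's actual implementation): out-of-vocabulary token ids are resolved against a small side table instead of concatenating new rows onto the full weight, which would copy the entire matrix.

```python
import torch

def embed(tokens, weight, extra_rows):
    # weight: the original (possibly huge) vocab embedding, never copied.
    # extra_rows: only the newly added token embeddings.
    vocab = weight.shape[0]
    rows = [weight[t] if t < vocab else extra_rows[t - vocab] for t in tokens]
    return torch.stack(rows)

weight = torch.randn(49408, 768)   # a CLIP-sized table
extra = torch.randn(2, 768)        # two freshly added tokens
print(embed([1, 49408, 49409], weight, extra).shape)  # torch.Size([3, 768])
```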
patientx
199e91029d
Merge branch 'comfyanonymous:master' into master 2025-03-05 23:54:19 +03:00
Chenlei Hu
5d84607bf3
Add type hint for FileLocator (#6968)
* Add type hint for FileLocator

* nit
2025-03-05 15:35:26 -05:00
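For context, a sketch of what such a type hint could look like (the exact shape is an assumption; the real definition lives in comfy/comfy_types):

```python
from typing import Literal, TypedDict

class FileLocator(TypedDict):
    """Addresses a file inside one of ComfyUI's managed folders."""
    filename: str
    subfolder: str
    type: Literal["input", "output", "temp"]

f: FileLocator = {"filename": "img.png", "subfolder": "", "type": "output"}
```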
Silver
c1909f350f
Better argument handling of front-end-root (#7043)
* Better argument handling of front-end-root

Improves handling of the front-end-root launch argument. There were several instances where users set it, yet ComfyUI launched as normal and completely disregarded the launch arg, which doesn't make sense. It is better to indicate to the user that something is incorrect.

* Removed unused import

There was no real reason to use "Optional" typing in the front-end-root argument.
2025-03-05 15:34:22 -05:00
Chenlei Hu
52b3469606
[NodeDef] Explicitly add control_after_generate to seed/noise_seed (#7059)
* [NodeDef] Explicitly add control_after_generate to seed/noise_seed

* Update comfy/comfy_types/node_typing.py

Co-authored-by: filtered <176114999+webfiltered@users.noreply.github.com>

---------

Co-authored-by: filtered <176114999+webfiltered@users.noreply.github.com>
2025-03-05 15:33:23 -05:00
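A hedged sketch of what declaring this explicitly in a node definition looks like (the node and field names here are illustrative, not the exact nodes touched by the PR):

```python
class SamplerNode:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "noise_seed": ("INT", {
                    "default": 0,
                    "min": 0,
                    "max": 0xFFFFFFFFFFFFFFFF,
                    # Previously implied by the widget name; now stated
                    # explicitly in the node definition itself.
                    "control_after_generate": True,
                }),
            }
        }
```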
patientx
c36a942c12
Merge branch 'comfyanonymous:master' into master 2025-03-05 19:11:25 +03:00
comfyanonymous
369b079ff6 Fix lowvram issue with ltxv vae. 2025-03-05 05:26:08 -05:00
comfyanonymous
9c9a7f012a Adjust ltxv memory factor. 2025-03-05 05:16:05 -05:00
patientx
93001919fa
Merge branch 'comfyanonymous:master' into master 2025-03-05 11:17:58 +03:00
comfyanonymous
93fedd92fe Support LTXV 0.9.5.
Credits: Lightricks team.
2025-03-05 00:13:49 -05:00
patientx
586eabb95d
Merge branch 'comfyanonymous:master' into master 2025-03-04 19:10:54 +03:00
comfyanonymous
65042f7d39 Make it easier to set a custom template for hunyuan video. 2025-03-04 09:26:05 -05:00
patientx
cacb8da101
Merge branch 'comfyanonymous:master' into master 2025-03-04 13:17:37 +03:00
comfyanonymous
7c7c70c400 Refactor skyreels i2v code. 2025-03-04 00:15:45 -05:00
patientx
f12bcde392
Merge branch 'comfyanonymous:master' into master 2025-03-03 15:10:56 +03:00
comfyanonymous
f86c724ef2 Temporal area composition.
New ConditioningSetAreaPercentageVideo node.
2025-03-03 06:50:31 -05:00
patientx
6c34d0a58a
Update zluda.py 2025-03-02 22:43:11 +03:00
patientx
7763bf823e
Merge branch 'comfyanonymous:master' into master 2025-03-02 17:18:12 +03:00
comfyanonymous
9af6320ec9 Make 2d area composition nodes work on video models. 2025-03-02 08:19:16 -05:00
patientx
0c4bebf5fb
Merge branch 'comfyanonymous:master' into master 2025-03-01 14:59:20 +03:00
comfyanonymous
4dc6709307 Rename argument in last commit and document the options. 2025-03-01 02:43:49 -05:00
Chenlei Hu
4d55f16ae8
Use enum list for --fast options (#7024) 2025-03-01 02:37:35 -05:00
patientx
c235a51d82
Update zluda.py 2025-02-28 16:41:56 +03:00
patientx
af43425ab5
Update model_management.py 2025-02-28 16:37:55 +03:00
patientx
1871a594ba
Merge branch 'comfyanonymous:master' into master 2025-02-28 11:47:19 +03:00
comfyanonymous
cf0b549d48 --fast now takes a number as an argument to indicate how fast you want it.
The idea is that you can indicate how much quality vs speed you want.

At the moment:

--fast 2 enables fp16 accumulation if your pytorch supports it.
--fast 5 enables fp8 matrix mult on fp8 models and the optimization above.

--fast without a number enables all optimizations.
2025-02-28 02:48:20 -05:00
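A hedged sketch of the CLI semantics described above (not the actual comfy/cli_args.py): --fast with no value enables everything, --fast N enables the optimizations at or below level N.

```python
import argparse

parser = argparse.ArgumentParser()
# const kicks in when --fast is passed with no number; default when absent.
parser.add_argument("--fast", nargs="?", type=int, const=99, default=0)
args = parser.parse_args(["--fast", "2"])

fp16_accumulation = args.fast >= 2  # --fast 2 and up
fp8_matrix_mult = args.fast >= 5    # --fast 5 and up
print(fp16_accumulation, fp8_matrix_mult)  # True False
```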
comfyanonymous
eb4543474b Use fp16 for intermediate for fp8 weights with --fast if supported. 2025-02-28 02:17:50 -05:00
comfyanonymous
1804397952 Use fp16 if checkpoint weights are fp16 and the model supports it. 2025-02-27 16:39:57 -05:00
patientx
cc65ca4c42
Merge branch 'comfyanonymous:master' into master 2025-02-28 00:39:40 +03:00
comfyanonymous
f4dac8ab6f Wan code small cleanup. 2025-02-27 07:22:42 -05:00
patientx
c4fb9f2a63
Merge branch 'comfyanonymous:master' into master 2025-02-27 13:06:17 +03:00
BiologicalExplosion
89253e9fe5
Support Cambricon MLU (#6964)
Co-authored-by: huzhan <huzhan@cambricon.com>
2025-02-26 20:45:13 -05:00
comfyanonymous
3ea3bc8546 Fix wan issues when prompt length is long. 2025-02-26 20:34:02 -05:00
patientx
b04a1f4127
Merge branch 'comfyanonymous:master' into master 2025-02-27 01:24:29 +03:00
comfyanonymous
0270a0b41c Reduce artifacts on Wan by doing the patch embedding in fp32. 2025-02-26 16:59:26 -05:00
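A sketch of the pattern, assuming a conv-based patch embedding (illustrative, not the actual Wan code): keep the projection weights in fp32 and upcast the input, then return to the model's compute dtype.

```python
import torch

class PatchEmbed(torch.nn.Module):
    def __init__(self, in_ch=16, dim=1536, patch=(1, 2, 2)):
        super().__init__()
        # Weights stay fp32 even when the rest of the model runs fp16/bf16.
        self.proj = torch.nn.Conv3d(in_ch, dim, kernel_size=patch, stride=patch)

    def forward(self, x):
        dtype = x.dtype
        return self.proj(x.float()).to(dtype)  # fp32 compute, cast back
```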
patientx
4f968c3c56
Merge branch 'comfyanonymous:master' into master 2025-02-26 17:11:50 +03:00
comfyanonymous
c37f15f98e Add fast preview support for Wan models. 2025-02-26 08:56:23 -05:00
patientx
1193f3fbb1
Merge branch 'comfyanonymous:master' into master 2025-02-26 16:47:39 +03:00
comfyanonymous
4bca7367f3 Don't try to use clip_fea on t2v model. 2025-02-26 08:38:09 -05:00
patientx
debf69185c
Merge branch 'comfyanonymous:master' into master 2025-02-26 16:00:33 +03:00
comfyanonymous
b6fefe686b Better wan memory estimation. 2025-02-26 07:51:22 -05:00
patientx
583f140eda
Merge branch 'comfyanonymous:master' into master 2025-02-26 13:26:25 +03:00
comfyanonymous
fa62287f1f More code reuse in wan.
Fix bug when changing the compute dtype on wan.
2025-02-26 05:22:29 -05:00
patientx
743996a1f7
Merge branch 'comfyanonymous:master' into master 2025-02-26 12:56:06 +03:00
comfyanonymous
0844998db3 Slightly better wan i2v mask implementation. 2025-02-26 03:49:50 -05:00
comfyanonymous
4ced06b879 WIP support for Wan I2V model. 2025-02-26 01:49:43 -05:00
patientx
1e91ff59a1
Merge branch 'comfyanonymous:master' into master 2025-02-26 09:24:15 +03:00
comfyanonymous
cb06e9669b Wan seems to work with fp16. 2025-02-25 21:37:12 -05:00
patientx
6e894524e2
Merge branch 'comfyanonymous:master' into master 2025-02-26 04:14:10 +03:00
comfyanonymous
9a66bb972d Make wan work with all latent resolutions.
Cleanup some code.
2025-02-25 19:56:04 -05:00
patientx
4269943ac3
Merge branch 'comfyanonymous:master' into master 2025-02-26 03:13:47 +03:00
comfyanonymous
ea0f939df3 Fix issue with wan and other attention implementations. 2025-02-25 19:13:39 -05:00
comfyanonymous
f37551c1d2 Change wan rope implementation to the flux one.
Should be more compatible.
2025-02-25 19:11:14 -05:00
patientx
879db7bdfc
Merge branch 'comfyanonymous:master' into master 2025-02-26 02:07:25 +03:00
comfyanonymous
63023011b9 WIP support for Wan t2v model. 2025-02-25 17:20:35 -05:00
patientx
6cf0fdcc3c
Merge branch 'comfyanonymous:master' into master 2025-02-25 17:12:14 +03:00
comfyanonymous
f40076096e Cleanup some lumina te code. 2025-02-25 04:10:26 -05:00
patientx
d705fe2e0b
Merge branch 'comfyanonymous:master' into master 2025-02-24 13:42:27 +03:00
comfyanonymous
96d891cb94 Speedup on some models by not upcasting bfloat16 to float32 on mac. 2025-02-24 05:41:32 -05:00
patientx
8142770e5f
Merge branch 'comfyanonymous:master' into master 2025-02-23 14:51:43 +03:00
comfyanonymous
ace899e71a Prioritize fp16 compute when using allow_fp16_accumulation 2025-02-23 04:45:54 -05:00
patientx
c15fe75f7b
Fix "CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasLtMatmulAlgoGetHeuristic`" error 2025-02-22 15:44:20 +03:00
patientx
26eb98b96f
Merge branch 'comfyanonymous:master' into master 2025-02-22 14:42:22 +03:00
comfyanonymous
aff16532d4 Remove some useless code. 2025-02-22 04:45:14 -05:00
comfyanonymous
072db3bea6 Assume the mac black image bug won't be fixed before v16. 2025-02-21 20:24:07 -05:00
comfyanonymous
a6deca6d9a Latest mac still has the black image bug. 2025-02-21 20:14:30 -05:00
patientx
059397437b
Merge branch 'comfyanonymous:master' into master 2025-02-21 23:25:43 +03:00
comfyanonymous
41c30e92e7 Let all model memory be offloaded on nvidia. 2025-02-21 06:32:21 -05:00
patientx
603cacb14a
Merge branch 'comfyanonymous:master' into master 2025-02-20 23:06:56 +03:00
comfyanonymous
12da6ef581 Apparently directml supports fp16. 2025-02-20 09:30:24 -05:00
Silver
c5be423d6b
Fix link pointing to non-existing docs (#6891)
* Fix link pointing to non-existing docs

The current link points to a path that no longer exists.
I changed it to point to the correct path for custom node datatypes.

* Update node_typing.py
2025-02-20 07:07:07 -05:00
patientx
a813e39d54
Merge branch 'comfyanonymous:master' into master 2025-02-19 16:19:26 +03:00
maedtb
5715be2ca9
Fix Hunyuan unet config detection for some models. (#6877)
The change to support 32 channel hunyuan models is missing the `key_prefix` on the key.

This addresses a complaint in the comments of acc152b674.
2025-02-19 07:14:45 -05:00
patientx
4e6a5fc548
Merge branch 'comfyanonymous:master' into master 2025-02-19 13:52:34 +03:00
bymyself
afc85cdeb6
Add Load Image Output node (#6790)
* add LoadImageOutput node

* add route for input/output/temp files

* update node_typing.py

* use literal type for image_folder field

* mark node as beta
2025-02-18 17:53:01 -05:00
Jukka Seppänen
acc152b674
Support loading and using SkyReels-V1-Hunyuan-I2V (#6862)
* Support SkyReels-V1-Hunyuan-I2V

* VAE scaling

* Fix T2V

oops

* Proper latent scaling
2025-02-18 17:06:54 -05:00
patientx
3bde94efbb
Merge branch 'comfyanonymous:master' into master 2025-02-18 16:27:10 +03:00
comfyanonymous
b07258cef2 Fix typo.
Let me know if this slows things down on 2000 series and below.
2025-02-18 07:28:33 -05:00
patientx
ef2e97356e
Merge branch 'comfyanonymous:master' into master 2025-02-17 14:04:15 +03:00
comfyanonymous
31e54b7052 Improve AMD arch detection. 2025-02-17 04:53:40 -05:00
comfyanonymous
8c0bae50c3 bf16 manual cast works on old AMD. 2025-02-17 04:42:40 -05:00
patientx
e8bf0ca27f
Merge branch 'comfyanonymous:master' into master 2025-02-17 12:42:02 +03:00
comfyanonymous
530412cb9d Refactor torch version checks to be more future proof. 2025-02-17 04:36:45 -05:00
patientx
45c55f6cd0
Merge branch 'comfyanonymous:master' into master 2025-02-16 14:37:41 +03:00
comfyanonymous
e2919d38b4 Disable bf16 on AMD GPUs that don't support it. 2025-02-16 05:46:10 -05:00
patientx
6f05ace055
Merge branch 'comfyanonymous:master' into master 2025-02-14 13:50:23 +03:00
comfyanonymous
1cd6cd6080 Disable pytorch attention in VAE for AMD. 2025-02-14 05:42:14 -05:00
patientx
f125a37bdf
Update model_management.py 2025-02-14 12:33:27 +03:00
patientx
99d2824d5a
Update model_management.py 2025-02-14 12:30:19 +03:00
comfyanonymous
d7b4bf21a2 Auto enable mem efficient attention on gfx1100 on pytorch nightly 2.7
I'm not sure which arches are supported yet. If you see improvements in
memory usage while using --use-pytorch-cross-attention on your AMD GPU, let
me know and I will add it to the list.
2025-02-14 04:18:14 -05:00
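A hedged sketch of this kind of arch gate (assumed shape, not the actual model_management.py): ROCm builds of PyTorch expose the gfx architecture name on the device properties, which can be matched against a known-good list.

```python
import torch

MEM_EFFICIENT_ARCHES = ["gfx1100"]  # grown as users report results

def mem_efficient_attention_supported():
    if not torch.cuda.is_available():
        return False
    props = torch.cuda.get_device_properties(0)
    arch = getattr(props, "gcnArchName", "")  # only present on ROCm builds
    return any(arch.startswith(a) for a in MEM_EFFICIENT_ARCHES)
```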
patientx
4d66aa9709
Merge branch 'comfyanonymous:master' into master 2025-02-14 11:00:12 +03:00
comfyanonymous
019c7029ea Add a way to set a different compute dtype for the model at runtime.
Currently only works for diffusion models.
2025-02-13 20:34:03 -05:00
patientx
bce4176d3d
fixes to use pytorch-attention 2025-02-13 19:17:35 +03:00
patientx
f9ee02080f
Merge branch 'comfyanonymous:master' into master 2025-02-13 16:37:24 +03:00
comfyanonymous
8773ccf74d Better memory estimation for ROCm cards that support mem efficient attention.
There is no way to check if the card actually supports it, so it assumes
that it does if you use --use-pytorch-cross-attention.
2025-02-13 08:32:36 -05:00
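A minimal sketch of the stated fallback (the factor is illustrative): with no reliable runtime probe, the user's flag stands in for a capability check.

```python
def attention_memory_factor(is_rocm: bool, pytorch_attention: bool) -> float:
    # Assume mem-efficient attention (lower peak memory) only when the
    # user opted in via --use-pytorch-cross-attention on ROCm.
    return 0.5 if (is_rocm and pytorch_attention) else 1.0
```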
patientx
e4448cf48e
Merge branch 'comfyanonymous:master' into master 2025-02-12 15:50:23 +03:00
comfyanonymous
1d5d6586f3 Fix ruff. 2025-02-12 06:49:16 -05:00
zhoufan2956
35740259de
mix_ascend_bf16_infer_err (#6794) 2025-02-12 06:48:11 -05:00
comfyanonymous
ab888e1e0b Add add_weight_wrapper function to model patcher.
Functions can now easily be added to wrap/modify model weights.
2025-02-12 05:55:35 -05:00
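A hedged sketch of the wrapper pattern this describes (the real signature lives in comfy/model_patcher.py and may differ): a wrapper is just a callable that takes a weight tensor and returns a modified one, applied when the weight is materialized.

```python
import torch

def scale_wrapper(weight: torch.Tensor) -> torch.Tensor:
    return weight * 0.5  # e.g. down-weight a layer

# Hypothetical usage against a loaded model patcher:
# patcher.add_weight_wrapper("diffusion_model.out_proj.weight", scale_wrapper)
```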
comfyanonymous
d9f0fcdb0c Cleanup. 2025-02-11 17:17:03 -05:00
HishamC
b124256817
Fix for running via DirectML (#6542)
* Fix for running via DirectML

Fix the DirectML empty image generation issue with Flux1. Add a CPU fallback for the unsupported path. Verified the model works on AMD GPUs.

* fix formatting

* update causal mask calculation
2025-02-11 17:11:32 -05:00
patientx
196fc385e1
Merge branch 'comfyanonymous:master' into master 2025-02-11 16:38:17 +03:00
comfyanonymous
af4b7c91be Make --force-fp16 actually force the diffusion model to be fp16. 2025-02-11 08:33:09 -05:00
patientx
2a0bc66fed
Merge branch 'comfyanonymous:master' into master 2025-02-10 15:41:15 +03:00
comfyanonymous
4027466c80 Make lumina model work with any latent resolution. 2025-02-10 00:24:20 -05:00
patientx
9561b03cc7
Merge branch 'comfyanonymous:master' into master 2025-02-09 15:33:03 +03:00
comfyanonymous
095d867147 Remove useless function. 2025-02-09 07:02:57 -05:00
Pam
caeb27c3a5
res_multistep: Fix cfgpp and add ancestral samplers (#6731) 2025-02-08 19:39:58 -05:00
comfyanonymous
3d06e1c555 Make error more clear to user. 2025-02-08 18:57:24 -05:00
catboxanon
43a74c0de1
Allow FP16 accumulation with --fast (#6453)
Currently only applies to PyTorch nightly releases (>=20250208).
2025-02-08 17:00:56 -05:00
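The switch being wired up is, to my knowledge, the backends flag below; the hasattr guard matters because the attribute only exists on sufficiently new PyTorch builds.

```python
import torch

if hasattr(torch.backends.cuda.matmul, "allow_fp16_accumulation"):
    # Trades some accuracy for faster fp16 matmuls on supported GPUs.
    torch.backends.cuda.matmul.allow_fp16_accumulation = True
```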
patientx
9a9da027b2
Merge branch 'comfyanonymous:master' into master 2025-02-07 12:02:36 +03:00
comfyanonymous
079eccc92a Don't compress http response by default.
Remove argument to disable it.

Add new --enable-compress-response-body argument to enable it.
2025-02-07 03:29:21 -05:00
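A sketch of an opt-in aiohttp compression middleware of the kind described (assumed shape, not the actual server.py), installed only when the new flag is passed:

```python
from aiohttp import web

@web.middleware
async def compress_body(request: web.Request, handler):
    response = await handler(request)
    if isinstance(response, web.Response):
        response.enable_compression()  # negotiates gzip/deflate via Accept-Encoding
    return response

app = web.Application(middlewares=[compress_body])
```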
patientx
4a1e3ee925
Merge branch 'comfyanonymous:master' into master 2025-02-06 14:33:29 +03:00
comfyanonymous
14880e6dba Remove some useless code. 2025-02-06 05:00:37 -05:00
patientx
93c0fc3446
Update supported_models.py 2025-02-05 23:12:38 +03:00
patientx
f8c2ab631a
Merge branch 'comfyanonymous:master' into master 2025-02-05 23:10:21 +03:00
comfyanonymous
37cd448529 Set the shift for Lumina back to 6. 2025-02-05 14:49:52 -05:00
comfyanonymous
94f21f9301 Upcasting rope to fp32 seems to make no difference in this model. 2025-02-05 04:32:47 -05:00
comfyanonymous
60653004e5 Use regular numbers for rope in lumina model. 2025-02-05 04:17:25 -05:00
patientx
059629a5fb
Merge branch 'comfyanonymous:master' into master 2025-02-05 10:15:14 +03:00
comfyanonymous
a57d635c5f Fix lumina 2 batches. 2025-02-04 21:48:11 -05:00
patientx
523b5352b8
Merge branch 'comfyanonymous:master' into master 2025-02-04 18:16:21 +03:00
comfyanonymous
8ac2dddeed Lower the default shift of lumina to reduce artifacts. 2025-02-04 06:50:37 -05:00
patientx
b8ab0f2091
Merge branch 'comfyanonymous:master' into master 2025-02-04 12:32:22 +03:00
comfyanonymous
3e880ac709 Fix on python 3.9 2025-02-04 04:20:56 -05:00
comfyanonymous
e5ea112a90 Support Lumina 2 model. 2025-02-04 04:16:30 -05:00
patientx
bf081a208a
Merge branch 'comfyanonymous:master' into master 2025-02-02 18:27:25 +03:00
comfyanonymous
44e19a28d3 Use maximum negative value instead of -inf for masks in text encoders.
This is probably more correct.
2025-02-02 09:46:00 -05:00
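The distinction in a nutshell: -inf in an additive attention mask can turn a fully-masked softmax row into NaNs, while the dtype's most negative finite value softmaxes to ~0 the same way without that risk.

```python
import torch

dtype = torch.float16
mask_value = torch.finfo(dtype).min      # instead of float("-inf")
mask = torch.full((1, 77), mask_value, dtype=dtype)
```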
patientx
a5eb46e557
Merge branch 'comfyanonymous:master' into master 2025-02-02 17:30:16 +03:00
Dr.Lt.Data
0a0df5f136
better guide message for sageattention (#6634) 2025-02-02 09:26:47 -05:00
KarryCharon
24d6871e47
add disable-compress-response-body cli arg; add compress middleware (#6672) 2025-02-02 09:24:55 -05:00
patientx
2ccb7dd301
Merge branch 'comfyanonymous:master' into master 2025-02-01 15:33:58 +03:00
comfyanonymous
9e1d301129 Only use stable cascade lora format with cascade model. 2025-02-01 06:35:22 -05:00
patientx
97f5a7d844
Merge branch 'comfyanonymous:master' into master 2025-01-30 23:03:44 +03:00
comfyanonymous
8d8dc9a262 Allow batch of different sigmas when noise scaling. 2025-01-30 06:49:52 -05:00
patientx
98cf486504
Merge branch 'comfyanonymous:master' into master 2025-01-29 18:53:00 +03:00
filtered
222f48c0f2
Allow changing folder_paths.base_path via command line argument. (#6600)
* Reimpl. CLI arg directly inside folder_paths.

* Update tests to use CLI arg mocking.

* Revert last-minute refactor.

* Fix test state polution.
2025-01-29 08:06:28 -05:00
patientx
a5c944f482
Merge branch 'comfyanonymous:master' into master 2025-01-28 19:20:13 +03:00
comfyanonymous
13fd4d6e45 More friendly error messages for corrupted safetensors files. 2025-01-28 09:41:09 -05:00
patientx
f77fea7fc6
Merge branch 'comfyanonymous:master' into master 2025-01-27 23:45:19 +03:00
comfyanonymous
255edf2246 Lower minimum ratio of loaded weights on Nvidia. 2025-01-27 05:26:51 -05:00
patientx
1442e34d9e
Merge branch 'comfyanonymous:master' into master 2025-01-26 15:27:37 +03:00
comfyanonymous
67feb05299 Remove redundant code. 2025-01-25 19:04:53 -05:00
patientx
73433fffa0
Merge branch 'comfyanonymous:master' into master 2025-01-24 15:16:37 +03:00
comfyanonymous
14ca5f5a10 Remove useless code. 2025-01-24 06:15:54 -05:00
patientx
076d037620
Merge branch 'comfyanonymous:master' into master 2025-01-23 13:56:49 +03:00
comfyanonymous
96e2a45193 Remove useless code. 2025-01-23 05:56:23 -05:00
Chenlei Hu
dfa2b6d129
Remove unused function lcm in conds.py (#6572) 2025-01-23 05:54:09 -05:00