patientx
b8ad97eca2
Update zluda.py
2025-03-08 14:57:09 +03:00
patientx
0bf4f88dea
Update zluda.py
2025-03-08 14:37:15 +03:00
patientx
2ce9177547
Update zluda.py
2025-03-08 14:33:15 +03:00
patientx
0f7de5c588
Update zluda.py
2025-03-08 14:23:08 +03:00
patientx
d385e286a1
Update zluda.py
2025-03-08 14:22:23 +03:00
patientx
09156b577c
Merge branch 'comfyanonymous:master' into master
2025-03-08 14:21:36 +03:00
comfyanonymous
be4e760648
Add an image_interleave option to the Hunyuan image to video encode node.
...
See the tooltip for what it does.
2025-03-07 19:56:26 -05:00
patientx
8847252eec
Merge branch 'comfyanonymous:master' into master
2025-03-07 16:14:21 +03:00
comfyanonymous
11b1f27cb1
Set WAN default compute dtype to fp16.
2025-03-07 04:52:36 -05:00
comfyanonymous
70e15fd743
No need for scale_input when fp8 matrix mult is disabled.
2025-03-07 04:49:20 -05:00
comfyanonymous
e1474150de
Support fp8_scaled diffusion models that don't use fp8 matrix mult.
2025-03-07 04:39:21 -05:00
patientx
7d16d06c3b
Merge branch 'comfyanonymous:master' into master
2025-03-06 23:42:01 +03:00
JettHu
e62d72e8ca
Typo in node_typing.py ( #7092 )
2025-03-06 15:24:04 -05:00
patientx
03825eaa28
Merge branch 'comfyanonymous:master' into master
2025-03-06 22:23:13 +03:00
comfyanonymous
dfa36e6855
Fix some things breaking when embeddings fail to apply.
2025-03-06 13:31:55 -05:00
patientx
44c060b3de
Merge branch 'comfyanonymous:master' into master
2025-03-06 12:16:31 +03:00
comfyanonymous
29a70ca101
Support HunyuanVideo image to video model.
2025-03-06 03:07:15 -05:00
comfyanonymous
0bef826a98
Support llava clip vision model.
2025-03-06 00:24:43 -05:00
comfyanonymous
85ef295069
Make applying embeddings more efficient.
...
Adding new tokens no longer makes a whole copy of the embeddings weight
which can be massive on certain models.
2025-03-05 17:34:38 -05:00
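The commit above avoids copying the full embedding weight when new tokens are added. A minimal illustrative sketch (not ComfyUI's actual code; names and the NumPy lookup are assumptions) of the idea: keep the base weight untouched and resolve newly added tokens against a small separate table.

```python
# Illustrative sketch (hypothetical, not ComfyUI's implementation) of
# extending an embedding table without duplicating the existing weight:
# the base matrix stays as-is, new tokens live in a small extra table.
import numpy as np

def lookup(base_weight, extra_weight, token_ids):
    """Resolve token ids against base embeddings plus appended extras."""
    n_base = base_weight.shape[0]
    rows = []
    for t in token_ids:
        if t < n_base:
            rows.append(base_weight[t])            # existing vocabulary
        else:
            rows.append(extra_weight[t - n_base])  # newly added tokens
    return np.stack(rows)
```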
patientx
199e91029d
Merge branch 'comfyanonymous:master' into master
2025-03-05 23:54:19 +03:00
Chenlei Hu
5d84607bf3
Add type hint for FileLocator ( #6968 )
...
* Add type hint for FileLocator
* nit
2025-03-05 15:35:26 -05:00
Silver
c1909f350f
Better argument handling of front-end-root ( #7043 )
...
* Better argument handling of front-end-root
Improves handling of the front-end-root launch argument. In several reported cases users set it incorrectly and ComfyUI launched as normal, completely disregarding the launch arg, which doesn't make sense. Better to indicate to the user that something is incorrect.
* Removed unused import
There was no real reason to use "Optional" typing in the front-end-root argument.
2025-03-05 15:34:22 -05:00
Chenlei Hu
52b3469606
[NodeDef] Explicitly add control_after_generate to seed/noise_seed ( #7059 )
...
* [NodeDef] Explicitly add control_after_generate to seed/noise_seed
* Update comfy/comfy_types/node_typing.py
Co-authored-by: filtered <176114999+webfiltered@users.noreply.github.com>
---------
Co-authored-by: filtered <176114999+webfiltered@users.noreply.github.com>
2025-03-05 15:33:23 -05:00
patientx
c36a942c12
Merge branch 'comfyanonymous:master' into master
2025-03-05 19:11:25 +03:00
comfyanonymous
369b079ff6
Fix lowvram issue with ltxv vae.
2025-03-05 05:26:08 -05:00
comfyanonymous
9c9a7f012a
Adjust ltxv memory factor.
2025-03-05 05:16:05 -05:00
patientx
93001919fa
Merge branch 'comfyanonymous:master' into master
2025-03-05 11:17:58 +03:00
comfyanonymous
93fedd92fe
Support LTXV 0.9.5.
...
Credits: Lightricks team.
2025-03-05 00:13:49 -05:00
patientx
586eabb95d
Merge branch 'comfyanonymous:master' into master
2025-03-04 19:10:54 +03:00
comfyanonymous
65042f7d39
Make it easier to set a custom template for hunyuan video.
2025-03-04 09:26:05 -05:00
patientx
cacb8da101
Merge branch 'comfyanonymous:master' into master
2025-03-04 13:17:37 +03:00
comfyanonymous
7c7c70c400
Refactor skyreels i2v code.
2025-03-04 00:15:45 -05:00
patientx
f12bcde392
Merge branch 'comfyanonymous:master' into master
2025-03-03 15:10:56 +03:00
comfyanonymous
f86c724ef2
Temporal area composition.
...
New ConditioningSetAreaPercentageVideo node.
2025-03-03 06:50:31 -05:00
patientx
6c34d0a58a
Update zluda.py
2025-03-02 22:43:11 +03:00
patientx
7763bf823e
Merge branch 'comfyanonymous:master' into master
2025-03-02 17:18:12 +03:00
comfyanonymous
9af6320ec9
Make 2d area composition nodes work on video models.
2025-03-02 08:19:16 -05:00
patientx
0c4bebf5fb
Merge branch 'comfyanonymous:master' into master
2025-03-01 14:59:20 +03:00
comfyanonymous
4dc6709307
Rename argument in last commit and document the options.
2025-03-01 02:43:49 -05:00
Chenlei Hu
4d55f16ae8
Use enum list for --fast options ( #7024 )
2025-03-01 02:37:35 -05:00
patientx
c235a51d82
Update zluda.py
2025-02-28 16:41:56 +03:00
patientx
af43425ab5
Update model_management.py
2025-02-28 16:37:55 +03:00
patientx
1871a594ba
Merge branch 'comfyanonymous:master' into master
2025-02-28 11:47:19 +03:00
comfyanonymous
cf0b549d48
--fast now takes a number as an argument to indicate how fast you want it.
...
The idea is that you can indicate how much quality vs speed you want.
At the moment:
--fast 2 enables fp16 accumulation if your pytorch supports it.
--fast 5 enables fp8 matrix mult on fp8 models and the optimization above.
--fast without a number enables all optimizations.
2025-02-28 02:48:20 -05:00
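The graded --fast levels described in the commit message above can be sketched as a small mapping from level to enabled optimizations (a hypothetical helper, not ComfyUI's actual argument parser; the thresholds are taken from the message):

```python
# Hypothetical sketch of the graded --fast semantics from the commit
# message above; the function name is illustrative, not ComfyUI code.
def fast_optimizations(level=None):
    """Return the optimizations a given --fast level would enable.

    level=None models passing --fast with no number (enable everything).
    """
    opts = set()
    if level is None or level >= 2:
        opts.add("fp16_accumulation")  # --fast 2 and up, if pytorch supports it
    if level is None or level >= 5:
        opts.add("fp8_matrix_mult")    # --fast 5 and up, on fp8 models
    return opts
```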
comfyanonymous
eb4543474b
Use fp16 for intermediate for fp8 weights with --fast if supported.
2025-02-28 02:17:50 -05:00
comfyanonymous
1804397952
Use fp16 if checkpoint weights are fp16 and the model supports it.
2025-02-27 16:39:57 -05:00
patientx
cc65ca4c42
Merge branch 'comfyanonymous:master' into master
2025-02-28 00:39:40 +03:00
comfyanonymous
f4dac8ab6f
Wan code small cleanup.
2025-02-27 07:22:42 -05:00
patientx
c4fb9f2a63
Merge branch 'comfyanonymous:master' into master
2025-02-27 13:06:17 +03:00
BiologicalExplosion
89253e9fe5
Support Cambricon MLU ( #6964 )
...
Co-authored-by: huzhan <huzhan@cambricon.com>
2025-02-26 20:45:13 -05:00
comfyanonymous
3ea3bc8546
Fix wan issues when prompt length is long.
2025-02-26 20:34:02 -05:00
patientx
b04a1f4127
Merge branch 'comfyanonymous:master' into master
2025-02-27 01:24:29 +03:00
comfyanonymous
0270a0b41c
Reduce artifacts on Wan by doing the patch embedding in fp32.
2025-02-26 16:59:26 -05:00
patientx
4f968c3c56
Merge branch 'comfyanonymous:master' into master
2025-02-26 17:11:50 +03:00
comfyanonymous
c37f15f98e
Add fast preview support for Wan models.
2025-02-26 08:56:23 -05:00
patientx
1193f3fbb1
Merge branch 'comfyanonymous:master' into master
2025-02-26 16:47:39 +03:00
comfyanonymous
4bca7367f3
Don't try to use clip_fea on t2v model.
2025-02-26 08:38:09 -05:00
patientx
debf69185c
Merge branch 'comfyanonymous:master' into master
2025-02-26 16:00:33 +03:00
comfyanonymous
b6fefe686b
Better wan memory estimation.
2025-02-26 07:51:22 -05:00
patientx
583f140eda
Merge branch 'comfyanonymous:master' into master
2025-02-26 13:26:25 +03:00
comfyanonymous
fa62287f1f
More code reuse in wan.
...
Fix bug when changing the compute dtype on wan.
2025-02-26 05:22:29 -05:00
patientx
743996a1f7
Merge branch 'comfyanonymous:master' into master
2025-02-26 12:56:06 +03:00
comfyanonymous
0844998db3
Slightly better wan i2v mask implementation.
2025-02-26 03:49:50 -05:00
comfyanonymous
4ced06b879
WIP support for Wan I2V model.
2025-02-26 01:49:43 -05:00
patientx
1e91ff59a1
Merge branch 'comfyanonymous:master' into master
2025-02-26 09:24:15 +03:00
comfyanonymous
cb06e9669b
Wan seems to work with fp16.
2025-02-25 21:37:12 -05:00
patientx
6e894524e2
Merge branch 'comfyanonymous:master' into master
2025-02-26 04:14:10 +03:00
comfyanonymous
9a66bb972d
Make wan work with all latent resolutions.
...
Cleanup some code.
2025-02-25 19:56:04 -05:00
patientx
4269943ac3
Merge branch 'comfyanonymous:master' into master
2025-02-26 03:13:47 +03:00
comfyanonymous
ea0f939df3
Fix issue with wan and other attention implementations.
2025-02-25 19:13:39 -05:00
comfyanonymous
f37551c1d2
Change wan rope implementation to the flux one.
...
Should be more compatible.
2025-02-25 19:11:14 -05:00
patientx
879db7bdfc
Merge branch 'comfyanonymous:master' into master
2025-02-26 02:07:25 +03:00
comfyanonymous
63023011b9
WIP support for Wan t2v model.
2025-02-25 17:20:35 -05:00
patientx
6cf0fdcc3c
Merge branch 'comfyanonymous:master' into master
2025-02-25 17:12:14 +03:00
comfyanonymous
f40076096e
Cleanup some lumina te code.
2025-02-25 04:10:26 -05:00
patientx
d705fe2e0b
Merge branch 'comfyanonymous:master' into master
2025-02-24 13:42:27 +03:00
comfyanonymous
96d891cb94
Speedup on some models by not upcasting bfloat16 to float32 on mac.
2025-02-24 05:41:32 -05:00
patientx
8142770e5f
Merge branch 'comfyanonymous:master' into master
2025-02-23 14:51:43 +03:00
comfyanonymous
ace899e71a
Prioritize fp16 compute when using allow_fp16_accumulation
2025-02-23 04:45:54 -05:00
patientx
c15fe75f7b
Fix for CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasLtMatmulAlgoGetHeuristic`
2025-02-22 15:44:20 +03:00
patientx
26eb98b96f
Merge branch 'comfyanonymous:master' into master
2025-02-22 14:42:22 +03:00
comfyanonymous
aff16532d4
Remove some useless code.
2025-02-22 04:45:14 -05:00
comfyanonymous
072db3bea6
Assume the mac black image bug won't be fixed before v16.
2025-02-21 20:24:07 -05:00
comfyanonymous
a6deca6d9a
Latest mac still has the black image bug.
2025-02-21 20:14:30 -05:00
patientx
059397437b
Merge branch 'comfyanonymous:master' into master
2025-02-21 23:25:43 +03:00
comfyanonymous
41c30e92e7
Let all model memory be offloaded on nvidia.
2025-02-21 06:32:21 -05:00
patientx
603cacb14a
Merge branch 'comfyanonymous:master' into master
2025-02-20 23:06:56 +03:00
comfyanonymous
12da6ef581
Apparently directml supports fp16.
2025-02-20 09:30:24 -05:00
Silver
c5be423d6b
Fix link pointing to non-existing docs ( #6891 )
...
* Fix link pointing to non-existing docs
The current link is pointing to a path that does not exist any longer.
I changed it to point to the correct path for custom node datatypes.
* Update node_typing.py
2025-02-20 07:07:07 -05:00
patientx
a813e39d54
Merge branch 'comfyanonymous:master' into master
2025-02-19 16:19:26 +03:00
maedtb
5715be2ca9
Fix Hunyuan unet config detection for some models. ( #6877 )
...
The change to support 32 channel hunyuan models is missing the `key_prefix` on the key.
This addresses a complaint in the comments of acc152b674 .
2025-02-19 07:14:45 -05:00
patientx
4e6a5fc548
Merge branch 'comfyanonymous:master' into master
2025-02-19 13:52:34 +03:00
bymyself
afc85cdeb6
Add Load Image Output node ( #6790 )
...
* add LoadImageOutput node
* add route for input/output/temp files
* update node_typing.py
* use literal type for image_folder field
* mark node as beta
2025-02-18 17:53:01 -05:00
Jukka Seppänen
acc152b674
Support loading and using SkyReels-V1-Hunyuan-I2V ( #6862 )
...
* Support SkyReels-V1-Hunyuan-I2V
* VAE scaling
* Fix T2V
oops
* Proper latent scaling
2025-02-18 17:06:54 -05:00
patientx
3bde94efbb
Merge branch 'comfyanonymous:master' into master
2025-02-18 16:27:10 +03:00
comfyanonymous
b07258cef2
Fix typo.
...
Let me know if this slows things down on 2000 series and below.
2025-02-18 07:28:33 -05:00
patientx
ef2e97356e
Merge branch 'comfyanonymous:master' into master
2025-02-17 14:04:15 +03:00
comfyanonymous
31e54b7052
Improve AMD arch detection.
2025-02-17 04:53:40 -05:00
comfyanonymous
8c0bae50c3
bf16 manual cast works on old AMD.
2025-02-17 04:42:40 -05:00
patientx
e8bf0ca27f
Merge branch 'comfyanonymous:master' into master
2025-02-17 12:42:02 +03:00