patientx
1e91ff59a1
Merge branch 'comfyanonymous:master' into master
2025-02-26 09:24:15 +03:00
comfyanonymous
cb06e9669b
Wan seems to work with fp16.
2025-02-25 21:37:12 -05:00
patientx
6e894524e2
Merge branch 'comfyanonymous:master' into master
2025-02-26 04:14:10 +03:00
comfyanonymous
9a66bb972d
Make wan work with all latent resolutions.
...
Cleanup some code.
2025-02-25 19:56:04 -05:00
patientx
4269943ac3
Merge branch 'comfyanonymous:master' into master
2025-02-26 03:13:47 +03:00
comfyanonymous
ea0f939df3
Fix issue with wan and other attention implementations.
2025-02-25 19:13:39 -05:00
comfyanonymous
f37551c1d2
Change wan rope implementation to the flux one.
...
Should be more compatible.
2025-02-25 19:11:14 -05:00
patientx
879db7bdfc
Merge branch 'comfyanonymous:master' into master
2025-02-26 02:07:25 +03:00
comfyanonymous
63023011b9
WIP support for Wan t2v model.
2025-02-25 17:20:35 -05:00
patientx
6cf0fdcc3c
Merge branch 'comfyanonymous:master' into master
2025-02-25 17:12:14 +03:00
comfyanonymous
f40076096e
Cleanup some lumina te code.
2025-02-25 04:10:26 -05:00
patientx
d705fe2e0b
Merge branch 'comfyanonymous:master' into master
2025-02-24 13:42:27 +03:00
comfyanonymous
96d891cb94
Speedup on some models by not upcasting bfloat16 to float32 on mac.
2025-02-24 05:41:32 -05:00
patientx
8142770e5f
Merge branch 'comfyanonymous:master' into master
2025-02-23 14:51:43 +03:00
comfyanonymous
ace899e71a
Prioritize fp16 compute when using allow_fp16_accumulation
2025-02-23 04:45:54 -05:00
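A minimal sketch of the kind of opt-in the commit above describes, assuming allow_fp16_accumulation is a flag exposed only by newer PyTorch builds (the attribute name is taken from the commit title and guarded with hasattr rather than assumed to exist):

```python
import torch

def try_enable_fp16_accumulation() -> bool:
    # Guarded probe: the flag only exists on PyTorch builds that ship it.
    matmul = torch.backends.cuda.matmul
    if hasattr(matmul, "allow_fp16_accumulation"):
        matmul.allow_fp16_accumulation = True
        return True
    return False

# If the flag is available and on, prefer fp16 compute (placeholder policy).
PREFER_FP16_COMPUTE = try_enable_fp16_accumulation()
```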
patientx
c15fe75f7b
CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasLtMatmulAlgoGetHeuristic`, error fix
2025-02-22 15:44:20 +03:00
patientx
26eb98b96f
Merge branch 'comfyanonymous:master' into master
2025-02-22 14:42:22 +03:00
comfyanonymous
aff16532d4
Remove some useless code.
2025-02-22 04:45:14 -05:00
comfyanonymous
072db3bea6
Assume the mac black image bug won't be fixed before v16.
2025-02-21 20:24:07 -05:00
comfyanonymous
a6deca6d9a
Latest mac still has the black image bug.
2025-02-21 20:14:30 -05:00
patientx
059397437b
Merge branch 'comfyanonymous:master' into master
2025-02-21 23:25:43 +03:00
comfyanonymous
41c30e92e7
Let all model memory be offloaded on nvidia.
2025-02-21 06:32:21 -05:00
patientx
603cacb14a
Merge branch 'comfyanonymous:master' into master
2025-02-20 23:06:56 +03:00
comfyanonymous
12da6ef581
Apparently directml supports fp16.
2025-02-20 09:30:24 -05:00
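A hedged probe for the fp16-on-DirectML observation above, assuming the optional torch-directml package is installed; it simply tries a tiny fp16 matmul on the DirectML device and reports failure instead of raising:

```python
import torch

def directml_supports_fp16() -> bool:
    try:
        import torch_directml  # optional dependency, not part of core torch
        dev = torch_directml.device()
        a = torch.ones((4, 4), dtype=torch.float16, device=dev)
        _ = a @ a  # raises if the backend cannot run fp16 matmul
        return True
    except Exception:
        return False
```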
Silver
c5be423d6b
Fix link pointing to non-existing docs (#6891)
...
* Fix link pointing to non-existing docs
The current link points to a path that no longer exists.
I changed it to point to the correct path for custom node datatypes.
* Update node_typing.py
2025-02-20 07:07:07 -05:00
patientx
a813e39d54
Merge branch 'comfyanonymous:master' into master
2025-02-19 16:19:26 +03:00
maedtb
5715be2ca9
Fix Hunyuan unet config detection for some models. (#6877)
...
The change to support 32 channel hunyuan models is missing the `key_prefix` on the key.
This addresses a complaint in the comments of acc152b674.
2025-02-19 07:14:45 -05:00
patientx
4e6a5fc548
Merge branch 'comfyanonymous:master' into master
2025-02-19 13:52:34 +03:00
bymyself
afc85cdeb6
Add Load Image Output node (#6790)
...
* add LoadImageOutput node
* add route for input/output/temp files
* update node_typing.py
* use literal type for image_folder field
* mark node as beta
2025-02-18 17:53:01 -05:00
Jukka Seppänen
acc152b674
Support loading and using SkyReels-V1-Hunyuan-I2V (#6862)
...
* Support SkyReels-V1-Hunyuan-I2V
* VAE scaling
* Fix T2V
oops
* Proper latent scaling
2025-02-18 17:06:54 -05:00
patientx
3bde94efbb
Merge branch 'comfyanonymous:master' into master
2025-02-18 16:27:10 +03:00
comfyanonymous
b07258cef2
Fix typo.
...
Let me know if this slows things down on 2000 series and below.
2025-02-18 07:28:33 -05:00
patientx
ef2e97356e
Merge branch 'comfyanonymous:master' into master
2025-02-17 14:04:15 +03:00
comfyanonymous
31e54b7052
Improve AMD arch detection.
2025-02-17 04:53:40 -05:00
comfyanonymous
8c0bae50c3
bf16 manual cast works on old AMD.
2025-02-17 04:42:40 -05:00
patientx
e8bf0ca27f
Merge branch 'comfyanonymous:master' into master
2025-02-17 12:42:02 +03:00
comfyanonymous
530412cb9d
Refactor torch version checks to be more future proof.
2025-02-17 04:36:45 -05:00
patientx
45c55f6cd0
Merge branch 'comfyanonymous:master' into master
2025-02-16 14:37:41 +03:00
comfyanonymous
e2919d38b4
Disable bf16 on AMD GPUs that don't support it.
2025-02-16 05:46:10 -05:00
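An illustrative gate for the bf16 change above, not ComfyUI's exact logic: only advertise bf16 when the active ROCm/CUDA device reports support for it.

```python
import torch

def should_use_bf16() -> bool:
    # torch.cuda covers ROCm builds as well; bf16 is only offered when the
    # runtime says the active device supports it.
    if not torch.cuda.is_available():
        return False
    return torch.cuda.is_bf16_supported()
```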
patientx
6f05ace055
Merge branch 'comfyanonymous:master' into master
2025-02-14 13:50:23 +03:00
comfyanonymous
1cd6cd6080
Disable pytorch attention in VAE for AMD.
2025-02-14 05:42:14 -05:00
patientx
f125a37bdf
Update model_management.py
2025-02-14 12:33:27 +03:00
patientx
99d2824d5a
Update model_management.py
2025-02-14 12:30:19 +03:00
comfyanonymous
d7b4bf21a2
Auto enable mem efficient attention on gfx1100 on pytorch nightly 2.7
...
I'm not sure which arches are supported yet. If you see improvements in
memory usage while using --use-pytorch-cross-attention on your AMD GPU, let
me know and I will add it to the list.
2025-02-14 04:18:14 -05:00
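A hedged sketch of the arch gate the commit describes: read the gfx name from the device properties (gcnArchName is a ROCm-specific field, hence the getattr fallback) and only auto-enable memory-efficient attention for a whitelist; the whitelist below contains only the arch named in the commit and is an assumption.

```python
import torch

AUTO_ENABLE_ARCHES = ("gfx1100",)  # assumption: only the arch named above

def should_auto_enable_mem_efficient_attention(device_index: int = 0) -> bool:
    if not torch.cuda.is_available():
        return False
    props = torch.cuda.get_device_properties(device_index)
    arch = getattr(props, "gcnArchName", "")  # missing on non-ROCm builds
    return any(arch.startswith(a) for a in AUTO_ENABLE_ARCHES)
```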
patientx
4d66aa9709
Merge branch 'comfyanonymous:master' into master
2025-02-14 11:00:12 +03:00
comfyanonymous
019c7029ea
Add a way to set a different compute dtype for the model at runtime.
...
Currently only works for diffusion models.
2025-02-13 20:34:03 -05:00
patientx
bce4176d3d
fixes to use pytorch-attention
2025-02-13 19:17:35 +03:00
patientx
f9ee02080f
Merge branch 'comfyanonymous:master' into master
2025-02-13 16:37:24 +03:00
comfyanonymous
8773ccf74d
Better memory estimation for ROCm cards that support mem efficient attention.
...
There is no way to check if the card actually supports it, so it is assumed
that it does if you use --use-pytorch-cross-attention.
2025-02-13 08:32:36 -05:00
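A rough illustration of the assumption in the commit body: when --use-pytorch-cross-attention is passed on ROCm, the memory estimate treats attention as memory-efficient rather than materializing the full attention matrix. The constants are placeholders, not ComfyUI's real estimator.

```python
def attention_memory_bytes(tokens: int, dim: int, dtype_size: int,
                           assume_mem_efficient: bool) -> int:
    # Placeholder model: mem-efficient attention only keeps activations,
    # while naive attention also materializes a tokens x tokens matrix.
    activations = tokens * dim * dtype_size * 4
    if assume_mem_efficient:
        return activations
    return activations + tokens * tokens * dtype_size
```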
patientx
e4448cf48e
Merge branch 'comfyanonymous:master' into master
2025-02-12 15:50:23 +03:00