Commit Graph

339 Commits

Author  SHA1  Message  Date
patientx  4541842b9a  Merge branch 'comfyanonymous:master' into master  2025-04-03 03:15:32 +03:00
BiologicalExplosion  2222cf67fd  MLU memory optimization (#7470)  2025-04-02 19:24:04 -04:00
    Co-authored-by: huzhan <huzhan@cambricon.com>
patientx  1040220970  Merge branch 'comfyanonymous:master' into master  2025-04-01 22:56:01 +03:00
BVH  301e26b131  Add option to store TE in bf16 (#7461)  2025-04-01 13:48:53 -04:00
patientx  8115bdf68a  Merge branch 'comfyanonymous:master' into master  2025-03-25 22:35:14 +03:00
comfyanonymous  8edc1f44c1  Support more float8 types.  2025-03-25 05:23:49 -04:00
patientx  eaf40b802d  Merge branch 'comfyanonymous:master' into master  2025-03-14 12:00:25 +03:00
FeepingCreature  7aceb9f91c  Add --use-flash-attention flag. (#7223)  2025-03-14 03:22:41 -04:00
    * Add --use-flash-attention flag.
    This is useful on AMD systems, as FA builds are still 10% faster than Pytorch cross-attention.
comfyanonymous  35504e2f93  Fix.  2025-03-13 15:03:18 -04:00
comfyanonymous  299436cfed  Print mac version.  2025-03-13 10:05:40 -04:00
patientx  c469113159  Merge branch 'comfyanonymous:master' into master  2025-03-09 14:09:50 +03:00
comfyanonymous  0952569493  Fix stable cascade VAE on some lowvram machines.  2025-03-08 20:24:04 -05:00
patientx  0c4bebf5fb  Merge branch 'comfyanonymous:master' into master  2025-03-01 14:59:20 +03:00
Chenlei Hu  4d55f16ae8  Use enum list for --fast options (#7024)  2025-03-01 02:37:35 -05:00
patientx  af43425ab5  Update model_management.py  2025-02-28 16:37:55 +03:00
patientx  1871a594ba  Merge branch 'comfyanonymous:master' into master  2025-02-28 11:47:19 +03:00
comfyanonymous  cf0b549d48  --fast now takes a number as an argument to indicate how fast you want it.  2025-02-28 02:48:20 -05:00
    The idea is that you can indicate how much quality vs. speed you want.

    At the moment:

    --fast 2 enables fp16 accumulation if your PyTorch supports it.
    --fast 5 enables fp8 matrix mult on fp8 models and the optimization above.

    --fast without a number enables all optimizations.
comfyanonymous  eb4543474b  Use fp16 for intermediate for fp8 weights with --fast if supported.  2025-02-28 02:17:50 -05:00
comfyanonymous  1804397952  Use fp16 if checkpoint weights are fp16 and the model supports it.  2025-02-27 16:39:57 -05:00
patientx  c4fb9f2a63  Merge branch 'comfyanonymous:master' into master  2025-02-27 13:06:17 +03:00
BiologicalExplosion  89253e9fe5  Support Cambricon MLU (#6964)  2025-02-26 20:45:13 -05:00
    Co-authored-by: huzhan <huzhan@cambricon.com>
patientx  d705fe2e0b  Merge branch 'comfyanonymous:master' into master  2025-02-24 13:42:27 +03:00
comfyanonymous  96d891cb94  Speedup on some models by not upcasting bfloat16 to float32 on mac.  2025-02-24 05:41:32 -05:00
patientx  8142770e5f  Merge branch 'comfyanonymous:master' into master  2025-02-23 14:51:43 +03:00
comfyanonymous  ace899e71a  Prioritize fp16 compute when using allow_fp16_accumulation  2025-02-23 04:45:54 -05:00
patientx  26eb98b96f  Merge branch 'comfyanonymous:master' into master  2025-02-22 14:42:22 +03:00
comfyanonymous  072db3bea6  Assume the mac black image bug won't be fixed before v16.  2025-02-21 20:24:07 -05:00
comfyanonymous  a6deca6d9a  Latest mac still has the black image bug.  2025-02-21 20:14:30 -05:00
patientx  059397437b  Merge branch 'comfyanonymous:master' into master  2025-02-21 23:25:43 +03:00
comfyanonymous  41c30e92e7  Let all model memory be offloaded on nvidia.  2025-02-21 06:32:21 -05:00
patientx  603cacb14a  Merge branch 'comfyanonymous:master' into master  2025-02-20 23:06:56 +03:00
comfyanonymous  12da6ef581  Apparently directml supports fp16.  2025-02-20 09:30:24 -05:00
patientx  3bde94efbb  Merge branch 'comfyanonymous:master' into master  2025-02-18 16:27:10 +03:00
comfyanonymous  b07258cef2  Fix typo.  2025-02-18 07:28:33 -05:00
    Let me know if this slows things down on 2000 series and below.
patientx  ef2e97356e  Merge branch 'comfyanonymous:master' into master  2025-02-17 14:04:15 +03:00
comfyanonymous  31e54b7052  Improve AMD arch detection.  2025-02-17 04:53:40 -05:00
comfyanonymous  8c0bae50c3  bf16 manual cast works on old AMD.  2025-02-17 04:42:40 -05:00
patientx  e8bf0ca27f  Merge branch 'comfyanonymous:master' into master  2025-02-17 12:42:02 +03:00
comfyanonymous  530412cb9d  Refactor torch version checks to be more future proof.  2025-02-17 04:36:45 -05:00
patientx  45c55f6cd0  Merge branch 'comfyanonymous:master' into master  2025-02-16 14:37:41 +03:00
comfyanonymous  e2919d38b4  Disable bf16 on AMD GPUs that don't support it.  2025-02-16 05:46:10 -05:00
patientx  6f05ace055  Merge branch 'comfyanonymous:master' into master  2025-02-14 13:50:23 +03:00
comfyanonymous  1cd6cd6080  Disable pytorch attention in VAE for AMD.  2025-02-14 05:42:14 -05:00
patientx  f125a37bdf  Update model_management.py  2025-02-14 12:33:27 +03:00
patientx  99d2824d5a  Update model_management.py  2025-02-14 12:30:19 +03:00
comfyanonymous  d7b4bf21a2  Auto enable mem efficient attention on gfx1100 on pytorch nightly 2.7  2025-02-14 04:18:14 -05:00
    I'm not sure which arches are supported yet. If you see improvements in memory usage
    while using --use-pytorch-cross-attention on your AMD GPU, let me know and I will add
    it to the list.
patientx  bce4176d3d  fixes to use pytorch-attention  2025-02-13 19:17:35 +03:00
patientx  f9ee02080f  Merge branch 'comfyanonymous:master' into master  2025-02-13 16:37:24 +03:00
comfyanonymous  8773ccf74d  Better memory estimation for ROCm cards that support mem efficient attention.  2025-02-13 08:32:36 -05:00
    There is no way to check whether the card actually supports it, so support is assumed
    if you use --use-pytorch-cross-attention.
patientx  e4448cf48e  Merge branch 'comfyanonymous:master' into master  2025-02-12 15:50:23 +03:00