patientx
d8ca8134c3
Merge branch 'comfyanonymous:master' into master
2025-07-29 11:56:59 +03:00
comfyanonymous
7d593baf91
Extra reserved vram on large cards on windows. (#9093)
2025-07-29 04:07:45 -04:00
patientx
970b7fb84f
Merge branch 'comfyanonymous:master' into master
2025-07-24 22:30:55 +03:00
comfyanonymous
69cb57b342
Print xpu device name. (#9035)
2025-07-24 15:06:25 -04:00
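The change itself is small; below is a minimal sketch of how the XPU device name can be queried and printed, assuming a PyTorch build that ships the torch.xpu backend (the actual log line in model_management.py may differ).

```python
import torch

# Minimal sketch: report Intel XPU device names at startup.
# Assumes a PyTorch build with the torch.xpu backend available.
if hasattr(torch, "xpu") and torch.xpu.is_available():
    for i in range(torch.xpu.device_count()):
        print(f"XPU device {i}: {torch.xpu.get_device_name(i)}")
else:
    print("No XPU device available.")
```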
honglyua
0ccc88b03f
Support Iluvatar CoreX (#8585)
Co-authored-by: mingjiang.li <mingjiang.li@iluvatar.com>
2025-07-24 13:57:36 -04:00
patientx
30539d0d13
Merge branch 'comfyanonymous:master' into master
2025-07-24 13:59:09 +03:00
comfyanonymous
d3504e1778
Enable pytorch attention by default for gfx1201 on torch 2.8 (#9029)
2025-07-23 19:21:29 -04:00
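"PyTorch attention" here refers to torch.nn.functional.scaled_dot_product_attention, which dispatches to a fused kernel when one is available. A minimal sketch of the call itself, independent of how ComfyUI wires it up:

```python
import torch
import torch.nn.functional as F

# Minimal sketch of the "pytorch attention" path: scaled_dot_product_attention
# picks a fused backend (flash / memory-efficient / math) when one is available.
q = torch.randn(1, 8, 1024, 64, device="cuda", dtype=torch.float16)
k = torch.randn(1, 8, 1024, 64, device="cuda", dtype=torch.float16)
v = torch.randn(1, 8, 1024, 64, device="cuda", dtype=torch.float16)

out = F.scaled_dot_product_attention(q, k, v)  # (batch, heads, seq, head_dim)
```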
comfyanonymous
a86a58c308
Fix xpu function not implemented p2. (#9027)
2025-07-23 18:18:20 -04:00
comfyanonymous
39dda1d40d
Fix xpu function not implemented. (#9026)
2025-07-23 18:10:59 -04:00
patientx
58f3250106
Merge branch 'comfyanonymous:master' into master
2025-07-23 23:49:46 +03:00
comfyanonymous
5ad33787de
Add default device argument. (#9023)
2025-07-23 14:20:49 -04:00
patientx
bd33a5d382
Merge branch 'comfyanonymous:master' into master
2025-07-23 03:27:52 +03:00
Simon Lui
255f139863
Add xpu version for async offload and some other things. (#9004)
2025-07-22 15:20:09 -04:00
patientx
c1ce148f41
Merge branch 'comfyanonymous:master' into master
2025-06-26 11:34:14 +03:00
comfyanonymous
a96e65df18
Disable omnigen2 fp16 on older pytorch versions. (#8672)
2025-06-26 03:39:09 -04:00
patientx
8909c12ea9
Update model_management.py
2025-06-14 13:27:49 +03:00
patientx
06ac233007
Merge branch 'comfyanonymous:master' into master
2025-06-10 20:34:42 +03:00
comfyanonymous
6e28a46454
Apple most likely is never fixing the fp16 attention bug. (#8485)
2025-06-10 13:06:24 -04:00
patientx
4bc3866c67
Merge branch 'comfyanonymous:master' into master
2025-06-09 21:10:00 +03:00
comfyanonymous
7f800d04fa
Enable AMD fp8 and pytorch attention on some GPUs. (#8474)
Information is from the pytorch source code.
2025-06-09 12:50:39 -04:00
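Gating features on specific AMD GPUs generally means inspecting the gfx architecture string reported by the driver. A rough sketch of that kind of check, assuming a ROCm build of PyTorch where the device properties expose gcnArchName; the actual list of enabled architectures lives in model_management.py and differs from the illustrative one below.

```python
import torch

# Rough sketch: enable fp8 / pytorch attention only on selected AMD architectures.
# Assumes a ROCm build of PyTorch where device properties expose gcnArchName.
def amd_arch(device_index: int = 0) -> str:
    props = torch.cuda.get_device_properties(device_index)
    return getattr(props, "gcnArchName", "")

ENABLE_ON = ("gfx1201", "gfx1151")  # illustrative list, not the real one

if torch.version.hip is not None and torch.cuda.is_available():
    if amd_arch().startswith(ENABLE_ON):
        print("fp8 ops and pytorch attention enabled for", amd_arch())
```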
patientx
b4d015f5f3
Merge branch 'comfyanonymous:master' into master
2025-06-08 21:21:41 +03:00
comfyanonymous
97755eed46
Enable fp8 ops by default on gfx1201 (#8464)
2025-06-08 14:15:34 -04:00
patientx
156aedd995
Merge branch 'comfyanonymous:master' into master
2025-06-07 19:30:45 +03:00
comfyanonymous
daf9d25ee2
Cleaner torch version comparisons. (#8453)
2025-06-07 10:01:15 -04:00
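A common way to make such comparisons robust is to parse torch.__version__ with packaging instead of comparing strings. A sketch of the pattern (not necessarily the exact helper used in the repo):

```python
import torch
from packaging import version

# Sketch: compare torch versions numerically instead of as strings.
# The local suffix ("+cu128", "+rocm6.2", ...) is stripped before parsing.
def torch_version_at_least(minimum: str) -> bool:
    current = version.parse(torch.__version__.split("+")[0])
    return current >= version.parse(minimum)

if torch_version_at_least("2.8.0"):
    print("torch 2.8 or newer detected")
```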
patientx
07b8d211e6
Merge branch 'comfyanonymous:master' into master
2025-05-30 23:48:15 +03:00
comfyanonymous
704fc78854
Put ROCm version in tuple to make it easier to enable stuff based on it. (#8348)
2025-05-30 15:41:02 -04:00
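The idea is the same as for torch version checks: turn the ROCm/HIP version string into a comparable tuple so gates like "ROCm >= 6.2" stay simple. A hedged sketch, assuming torch.version.hip is set on ROCm builds and None elsewhere:

```python
import torch

# Sketch: expose the ROCm version as a tuple so feature gates can compare it directly.
# torch.version.hip is a string like "6.2.41134-..." on ROCm builds and None elsewhere.
def rocm_version() -> tuple:
    hip = torch.version.hip
    if hip is None:
        return ()
    return tuple(int(p) for p in hip.split(".")[:2] if p.isdigit())

if rocm_version() >= (6, 2):
    print("enabling features that need ROCm 6.2+")
```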
patientx
8609a6dced
Merge branch 'comfyanonymous:master' into master
2025-05-27 01:03:35 +03:00
comfyanonymous
89a84e32d2
Disable initial GPU load when novram is used. (#8294)
2025-05-26 16:39:27 -04:00
patientx
bbcb33ea72
Merge branch 'comfyanonymous:master' into master
2025-05-26 16:26:39 +03:00
comfyanonymous
e5799c4899
Enable pytorch attention by default on AMD gfx1151 (#8282)
2025-05-26 04:29:25 -04:00
patientx
3b69a08c08
Merge branch 'comfyanonymous:master' into master
2025-05-24 04:06:28 +03:00
comfyanonymous
0b50d4c0db
Add argument to explicitly enable fp8 compute support. (#8257)
This can be used to test if your current GPU/pytorch version supports fp8 matrix mult in combination with --fast or the fp8_e4m3fn_fast dtype.
2025-05-23 17:43:50 -04:00
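The argument only forces the decision; automatic detection normally combines the fp8 dtype being available with a capable GPU. A rough, hedged heuristic sketch of that idea (the flag name and the real detection logic live in cli_args.py / model_management.py and may differ):

```python
import torch

# Rough heuristic sketch: is fp8 matrix multiplication likely to work here?
# Real detection in model_management.py is more involved; this only checks that
# the dtype exists and that the NVIDIA GPU is new enough (Ada/Hopper, >= 8.9).
def probably_supports_fp8_compute() -> bool:
    if not hasattr(torch, "float8_e4m3fn"):
        return False
    if not torch.cuda.is_available():
        return False
    major, minor = torch.cuda.get_device_capability()
    return (major, minor) >= (8, 9)

FORCE_FP8_COMPUTE = False  # stand-in for the command line override, not the real flag
if FORCE_FP8_COMPUTE or probably_supports_fp8_compute():
    print("fp8 compute path enabled")
```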
patientx
f49d26848b
Merge branch 'comfyanonymous:master' into master
2025-04-30 13:46:06 +03:00
comfyanonymous
0a66d4b0af
Per device stream counters for async offload. (#7873)
2025-04-29 20:28:52 -04:00
patientx
0aeb958ea5
Merge branch 'comfyanonymous:master' into master
2025-04-29 01:49:37 +03:00
comfyanonymous
5a50c3c7e5
Fix stream priority to support older pytorch. (#7856)
2025-04-28 13:07:21 -04:00
patientx
6244dfa1e1
Merge branch 'comfyanonymous:master' into master
2025-04-28 01:13:53 +03:00
comfyanonymous
c8cd7ad795
Use stream for casting if enabled. (#7833)
2025-04-27 05:38:11 -04:00
patientx
9cc8e2e1d0
Merge branch 'comfyanonymous:master' into master
2025-04-26 23:32:14 +03:00
comfyanonymous
0dcc75ca54
Add experimental --async-offload lowvram weight offloading. (#7820)
This should speed up the lowvram mode a bit. It is currently only enabled when --async-offload is used, but it will be enabled by default in the future if there are no problems.
2025-04-26 16:11:21 -04:00
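Async offload hides weight-upload latency by copying the next weights to the GPU on a separate CUDA stream while the current layer computes. A simplified sketch of the general technique, assuming pinned host memory and a CUDA device; the real implementation additionally tracks per-device streams and their synchronization:

```python
import torch

# Simplified sketch of async weight offloading: upload on a side CUDA stream with a
# non-blocking copy from pinned host memory, then make the compute stream wait for it.
device = torch.device("cuda")
copy_stream = torch.cuda.Stream(device=device)

weight_cpu = torch.randn(4096, 4096, dtype=torch.float16).pin_memory()

with torch.cuda.stream(copy_stream):
    weight_gpu = weight_cpu.to(device, non_blocking=True)

# ... compute for the current layer could run on the default stream here ...

# Before using the weight, make the current stream wait for the copy to finish.
torch.cuda.current_stream().wait_stream(copy_stream)
weight_gpu.record_stream(torch.cuda.current_stream())  # inform the caching allocator
out = weight_gpu @ weight_gpu.T
```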
patientx
a397c3aeb3
Merge branch 'comfyanonymous:master' into master
2025-04-22 13:28:29 +03:00
comfyanonymous
2d6805ce57
Add option for using fp8_e8m0fnu for model weights. (#7733)
Seems to break every model I have tried but worth testing?
2025-04-22 06:17:38 -04:00
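fp8_e8m0fnu is an exponent-only 8-bit format, so a plain cast of weights to it is extremely lossy, which matches the "seems to break every model" note. A guarded sketch of what storing weights in that dtype looks like, assuming a recent PyTorch build that exposes torch.float8_e8m0fnu and supports casting to it:

```python
import torch

# Guarded sketch: store a weight tensor as float8_e8m0fnu (exponent only, no mantissa),
# upcasting back to the compute dtype before use. Only newer PyTorch builds expose it.
if hasattr(torch, "float8_e8m0fnu"):
    w = torch.randn(1024, 1024)
    w_fp8 = w.to(torch.float8_e8m0fnu)      # very lossy storage format
    w_compute = w_fp8.to(torch.bfloat16)    # upcast before the actual matmul
    print("max abs error:", (w - w_compute).abs().max().item())
else:
    print("this PyTorch build does not expose torch.float8_e8m0fnu")
```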
patientx
4541842b9a
Merge branch 'comfyanonymous:master' into master
2025-04-03 03:15:32 +03:00
BiologicalExplosion
2222cf67fd
MLU memory optimization (#7470)
Co-authored-by: huzhan <huzhan@cambricon.com>
2025-04-02 19:24:04 -04:00
patientx
1040220970
Merge branch 'comfyanonymous:master' into master
2025-04-01 22:56:01 +03:00
BVH
301e26b131
Add option to store TE in bf16 (#7461)
2025-04-01 13:48:53 -04:00
patientx
8115bdf68a
Merge branch 'comfyanonymous:master' into master
2025-03-25 22:35:14 +03:00
comfyanonymous
8edc1f44c1
Support more float8 types.
2025-03-25 05:23:49 -04:00
patientx
eaf40b802d
Merge branch 'comfyanonymous:master' into master
2025-03-14 12:00:25 +03:00
FeepingCreature
7aceb9f91c
Add --use-flash-attention flag. (#7223)
This is useful on AMD systems, as FA builds are still 10% faster than Pytorch cross-attention.
2025-03-14 03:22:41 -04:00
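When the flag is set, attention goes through the flash-attn package instead of torch's built-in SDPA. A minimal sketch of a flash_attn_func call, assuming the flash-attn package is installed; note that it expects (batch, seqlen, heads, head_dim) layout and fp16/bf16 inputs on GPU.

```python
import torch
from flash_attn import flash_attn_func

# Minimal sketch of the flash-attn path enabled by --use-flash-attention.
# flash_attn_func expects (batch, seqlen, nheads, head_dim) tensors in fp16/bf16 on GPU.
q = torch.randn(1, 1024, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn(1, 1024, 8, 64, device="cuda", dtype=torch.float16)
v = torch.randn(1, 1024, 8, 64, device="cuda", dtype=torch.float16)

out = flash_attn_func(q, k, v, causal=False)  # output has the same layout as the inputs
```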