Commit Graph

375 Commits

Author SHA1 Message Date
doctorpangloss
a7aff3565b Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-06-26 16:57:25 -07:00
comfyanonymous
a96e65df18 Disable omnigen2 fp16 on older pytorch versions. (#8672) 2025-06-26 03:39:09 -04:00
doctorpangloss
1d901e32eb Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-06-17 13:20:31 -07:00
doctorpangloss
82388d51a2 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-06-17 10:35:10 -07:00
comfyanonymous
6e28a46454 Apple is most likely never going to fix the fp16 attention bug. (#8485) 2025-06-10 13:06:24 -04:00
comfyanonymous
7f800d04fa Enable AMD fp8 and pytorch attention on some GPUs. (#8474)
Information is from the pytorch source code.
2025-06-09 12:50:39 -04:00
comfyanonymous
97755eed46 Enable fp8 ops by default on gfx1201 (#8464) 2025-06-08 14:15:34 -04:00
comfyanonymous
daf9d25ee2 Cleaner torch version comparisons. (#8453) 2025-06-07 10:01:15 -04:00
comfyanonymous
704fc78854 Put the ROCm version in a tuple to make it easier to enable features based on it. (#8348) 2025-05-30 15:41:02 -04:00
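Comparing versions as integer tuples is the pattern these two commits move toward. A minimal illustrative sketch, not the code from either commit (the helper name and the (2, 4) threshold are invented for the example):

    import re
    import torch

    def torch_version_tuple() -> tuple[int, ...]:
        # torch.version.__version__ looks like "2.4.1+rocm6.1" or "2.5.0.dev20240807+cu124";
        # keep only the leading numeric components so tuple comparison is meaningful.
        match = re.match(r"(\d+)\.(\d+)", torch.version.__version__)
        return tuple(int(part) for part in match.groups())

    if torch_version_tuple() >= (2, 4):
        pass  # placeholder: enable a feature gated on torch >= 2.4

Tuple comparison avoids the classic string-comparison trap where "2.10" sorts before "2.9".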
comfyanonymous
89a84e32d2 Disable initial GPU load when novram is used. (#8294) 2025-05-26 16:39:27 -04:00
comfyanonymous
e5799c4899 Enable pytorch attention by default on AMD gfx1151 (#8282) 2025-05-26 04:29:25 -04:00
comfyanonymous
0b50d4c0db Add argument to explicitly enable fp8 compute support. (#8257)
This can be used to test whether your current GPU/pytorch combination supports fp8 matrix multiplication together with --fast or the fp8_e4m3fn_fast dtype.
2025-05-23 17:43:50 -04:00
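The commit does not spell out how the check works; as a rough, hypothetical illustration of what "does this GPU/PyTorch build support fp8 matrix multiplication" can mean in practice (this is not ComfyUI's code, it assumes a CUDA device, and the private torch._scaled_mm signature varies between PyTorch releases):

    import torch

    def probe_fp8_matmul(device: str = "cuda") -> bool:
        # Returns True if a tiny fp8 (e4m3fn) matmul runs without raising.
        if not hasattr(torch, "float8_e4m3fn"):
            return False  # this PyTorch build predates the fp8 dtypes
        try:
            a = torch.randn(16, 16, device=device, dtype=torch.float16).to(torch.float8_e4m3fn)
            b = torch.randn(16, 16, device=device, dtype=torch.float16).to(torch.float8_e4m3fn)
            scale = torch.ones((), device=device)
            # _scaled_mm expects the second operand in column-major layout, hence b.t().
            torch._scaled_mm(a, b.t(), scale_a=scale, scale_b=scale, out_dtype=torch.float16)
            return True
        except Exception:
            return False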
Benjamin Berman
b6d3f1fb08 Accept workflows from the command line 2025-05-07 14:53:39 -07:00
comfyanonymous
0a66d4b0af Per device stream counters for async offload. (#7873) 2025-04-29 20:28:52 -04:00
comfyanonymous
5a50c3c7e5 Fix stream priority to support older pytorch. (#7856) 2025-04-28 13:07:21 -04:00
comfyanonymous
c8cd7ad795 Use stream for casting if enabled. (#7833) 2025-04-27 05:38:11 -04:00
comfyanonymous
0dcc75ca54 Add experimental --async-offload lowvram weight offloading. (#7820)
This should speed up the lowvram mode a bit. It is currently only enabled when --async-offload is used, but it will be enabled by default in the future if there are no problems.
2025-04-26 16:11:21 -04:00
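As context for what asynchronous offloading means here, a minimal sketch of the general CUDA-stream pattern (assumed for illustration; not the implementation from this commit): weights are copied to the GPU on a dedicated side stream so the transfer can overlap with compute, and the compute stream waits on an event before using them.

    import torch

    offload_stream = torch.cuda.Stream()  # side stream dedicated to weight transfers

    def load_weight_async(cpu_weight: torch.Tensor, device: str = "cuda"):
        with torch.cuda.stream(offload_stream):
            gpu_weight = cpu_weight.pin_memory().to(device, non_blocking=True)
            done = torch.cuda.Event()
            done.record(offload_stream)
        return gpu_weight, done

    # Later, on the default (compute) stream, before touching gpu_weight:
    # torch.cuda.current_stream().wait_event(done)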
comfyanonymous
2d6805ce57 Add option for using fp8_e8m0fnu for model weights. (#7733)
Seems to break every model I have tried, but worth testing?
2025-04-22 06:17:38 -04:00
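Since torch.float8_e8m0fnu is not present in older PyTorch builds, any code selecting it has to guard for its existence; a hedged sketch (illustrative only, not the commit's code):

    import torch

    # Fall back to the more widely supported e4m3fn variant when e8m0fnu is absent.
    weight_dtype = getattr(torch, "float8_e8m0fnu", None)
    if weight_dtype is None:
        weight_dtype = torch.float8_e4m3fn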
doctorpangloss
5823497d55 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-04-21 13:14:36 -07:00
doctorpangloss
ffc1912eff Fix issues with tests 2025-04-04 08:27:33 -07:00
BiologicalExplosion
2222cf67fd MLU memory optimization (#7470)
Co-authored-by: huzhan <huzhan@cambricon.com>
2025-04-02 19:24:04 -04:00
BVH
301e26b131 Add option to store TE in bf16 (#7461) 2025-04-01 13:48:53 -04:00
doctorpangloss
040a324346 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-03-29 15:57:24 -07:00
comfyanonymous
8edc1f44c1 Support more float8 types. 2025-03-25 05:23:49 -04:00
FeepingCreature
7aceb9f91c Add --use-flash-attention flag. (#7223)
This is useful on AMD systems, as Flash Attention builds are still 10% faster than PyTorch cross-attention.
2025-03-14 03:22:41 -04:00
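The flag presumably routes attention through the separately installed flash-attn package. A minimal, hedged sketch of calling that library directly (assuming flash-attn is installed and tensors are fp16/bf16 on the GPU; this is not ComfyUI's wrapper code):

    import torch
    from flash_attn import flash_attn_func

    # q, k, v have shape (batch, seq_len, num_heads, head_dim) and must be fp16 or bf16.
    q = torch.randn(1, 1024, 8, 64, device="cuda", dtype=torch.float16)
    k = torch.randn(1, 1024, 8, 64, device="cuda", dtype=torch.float16)
    v = torch.randn(1, 1024, 8, 64, device="cuda", dtype=torch.float16)
    out = flash_attn_func(q, k, v, causal=False)  # output has the same shape as q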
comfyanonymous
35504e2f93 Fix. 2025-03-13 15:03:18 -04:00
comfyanonymous
299436cfed Print mac version. 2025-03-13 10:05:40 -04:00
comfyanonymous
0952569493 Fix stable cascade VAE on some lowvram machines. 2025-03-08 20:24:04 -05:00
doctorpangloss
83948cafd1 WAN 2.1 support 2025-03-06 07:32:04 -08:00
doctorpangloss
3c82be86d1 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-03-05 14:38:50 -08:00
doctorpangloss
d82261485f Prompt upsampling, better torch.compile support for language models 2025-03-03 18:36:47 -08:00
Chenlei Hu
4d55f16ae8 Use enum list for --fast options (#7024) 2025-03-01 02:37:35 -05:00
comfyanonymous
cf0b549d48 --fast now takes a number as an argument to indicate how fast you want it.
The idea is that you can indicate how much quality vs. speed you want.

At the moment:

--fast 2 enables fp16 accumulation if your pytorch supports it.
--fast 5 enables fp8 matrix mult on fp8 models and the optimization above.

--fast without a number enables all optimizations.
2025-02-28 02:48:20 -05:00
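The fp16-accumulation level maps onto a PyTorch backend toggle (named in commit ace899e71a further down); a hedged sketch of flipping it only when the running build exposes such a switch (attribute path assumed for illustration):

    import torch

    # Only recent PyTorch builds expose this switch, hence the hasattr guard.
    if hasattr(torch.backends.cuda.matmul, "allow_fp16_accumulation"):
        torch.backends.cuda.matmul.allow_fp16_accumulation = True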
comfyanonymous
eb4543474b Use fp16 as the intermediate dtype for fp8 weights with --fast, if supported. 2025-02-28 02:17:50 -05:00
comfyanonymous
1804397952 Use fp16 if checkpoint weights are fp16 and the model supports it. 2025-02-27 16:39:57 -05:00
BiologicalExplosion
89253e9fe5 Support Cambricon MLU (#6964)
Co-authored-by: huzhan <huzhan@cambricon.com>
2025-02-26 20:45:13 -05:00
doctorpangloss
048746f58b Update to 0.3.15 and improve models
- Cosmos now fully tested
- Preliminary support for the essential Cosmos prompt "upsampler"
- Lumina tests
- Tweaks to language and image resizing nodes
- Fix for #31: all the samplers are now present again
2025-02-24 21:27:15 -08:00
doctorpangloss
693038738a Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-02-24 09:39:26 -08:00
comfyanonymous
96d891cb94 Speedup on some models by not upcasting bfloat16 to float32 on mac. 2025-02-24 05:41:32 -05:00
comfyanonymous
ace899e71a Prioritize fp16 compute when using allow_fp16_accumulation 2025-02-23 04:45:54 -05:00
comfyanonymous
072db3bea6 Assume the mac black image bug won't be fixed before v16. 2025-02-21 20:24:07 -05:00
comfyanonymous
a6deca6d9a Latest mac still has the black image bug. 2025-02-21 20:14:30 -05:00
comfyanonymous
41c30e92e7 Let all model memory be offloaded on nvidia. 2025-02-21 06:32:21 -05:00
comfyanonymous
12da6ef581 Apparently directml supports fp16. 2025-02-20 09:30:24 -05:00
comfyanonymous
b07258cef2 Fix typo.
Let me know if this slows things down on 2000 series and below.
2025-02-18 07:28:33 -05:00
comfyanonymous
31e54b7052 Improve AMD arch detection. 2025-02-17 04:53:40 -05:00
comfyanonymous
8c0bae50c3 bf16 manual cast works on old AMD. 2025-02-17 04:42:40 -05:00
comfyanonymous
530412cb9d Refactor torch version checks to be more future proof. 2025-02-17 04:36:45 -05:00
comfyanonymous
e2919d38b4 Disable bf16 on AMD GPUs that don't support it. 2025-02-16 05:46:10 -05:00
comfyanonymous
1cd6cd6080 Disable pytorch attention in VAE for AMD. 2025-02-14 05:42:14 -05:00