Commit Graph

357 Commits

Author SHA1 Message Date
doctorpangloss
5823497d55 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-04-21 13:14:36 -07:00
doctorpangloss
ffc1912eff Fix issues with tests 2025-04-04 08:27:33 -07:00
BiologicalExplosion
2222cf67fd
MLU memory optimization (#7470)
Co-authored-by: huzhan <huzhan@cambricon.com>
2025-04-02 19:24:04 -04:00
BVH
301e26b131
Add option to store TE in bf16 (#7461) 2025-04-01 13:48:53 -04:00
doctorpangloss
040a324346 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-03-29 15:57:24 -07:00
comfyanonymous
8edc1f44c1 Support more float8 types. 2025-03-25 05:23:49 -04:00
FeepingCreature
7aceb9f91c
Add --use-flash-attention flag. (#7223)
* Add --use-flash-attention flag.
This is useful on AMD systems, as FA builds are still 10% faster than PyTorch cross-attention.
2025-03-14 03:22:41 -04:00
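For context on what the flag toggles, here is a minimal sketch of how a `--use-flash-attention` style switch could choose between the flash-attn kernel and PyTorch's built-in scaled dot product attention. The function and fallback logic below are illustrative, not ComfyUI's actual attention code; it assumes the `flash-attn` package's `flash_attn_func`, which takes tensors in (batch, seq, heads, dim) layout.

```python
# Illustrative sketch (not ComfyUI's code path): gating the attention kernel
# on a --use-flash-attention style option, with a fallback to PyTorch SDPA.
import torch
import torch.nn.functional as F

try:
    from flash_attn import flash_attn_func  # optional dependency
    FLASH_ATTN_AVAILABLE = True
except ImportError:
    FLASH_ATTN_AVAILABLE = False

def attention(q, k, v, use_flash_attention=False):
    # q, k, v: (batch, heads, seq, dim), fp16/bf16 tensors on the GPU
    if use_flash_attention and FLASH_ATTN_AVAILABLE:
        # flash_attn_func expects (batch, seq, heads, dim), so transpose in and out
        out = flash_attn_func(q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2))
        return out.transpose(1, 2)
    # Default: PyTorch's built-in scaled dot product attention
    return F.scaled_dot_product_attention(q, k, v)
```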
comfyanonymous
35504e2f93 Fix. 2025-03-13 15:03:18 -04:00
comfyanonymous
299436cfed Print mac version. 2025-03-13 10:05:40 -04:00
comfyanonymous
0952569493 Fix stable cascade VAE on some lowvram machines. 2025-03-08 20:24:04 -05:00
doctorpangloss
83948cafd1 WAN 2.1 support 2025-03-06 07:32:04 -08:00
doctorpangloss
3c82be86d1 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-03-05 14:38:50 -08:00
doctorpangloss
d82261485f Prompt upsampling, better torch.compile support for language models 2025-03-03 18:36:47 -08:00
Chenlei Hu
4d55f16ae8
Use enum list for --fast options (#7024) 2025-03-01 02:37:35 -05:00
comfyanonymous
cf0b549d48 --fast now takes a number as an argument to indicate how fast you want it.
The idea is that you can indicate how much quality vs. speed you want.

At the moment:

--fast 2 enables fp16 accumulation if your PyTorch supports it.
--fast 5 enables fp8 matrix multiplication on fp8 models and the optimization above.

--fast without a number enables all optimizations.
2025-02-28 02:48:20 -05:00
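A rough sketch of the numeric `--fast` behavior described in the commit above, assuming hypothetical optimization names (`fp16_accumulation`, `fp8_matrix_mult`); the real option handling lives in ComfyUI's CLI parsing and was later reworked into an enum list (#7024).

```python
# Sketch only: map a numeric --fast level to a set of optimizations.
import argparse

OPTIMIZATIONS_BY_LEVEL = {
    2: {"fp16_accumulation"},                     # --fast 2
    5: {"fp16_accumulation", "fp8_matrix_mult"},  # --fast 5
}
ALL_OPTIMIZATIONS = {"fp16_accumulation", "fp8_matrix_mult"}

parser = argparse.ArgumentParser()
# nargs="?" with a const lets a bare --fast (no number) mean "enable everything".
parser.add_argument("--fast", nargs="?", type=int, const=-1, default=None)
args = parser.parse_args(["--fast", "2"])

if args.fast is None:
    enabled = set()                 # flag not given: no extra optimizations
elif args.fast == -1:
    enabled = ALL_OPTIMIZATIONS     # bare --fast: everything
else:
    enabled = {o for lvl, opts in OPTIMIZATIONS_BY_LEVEL.items()
               if args.fast >= lvl for o in opts}
print(enabled)  # {'fp16_accumulation'}
```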
comfyanonymous
eb4543474b Use fp16 for intermediate for fp8 weights with --fast if supported. 2025-02-28 02:17:50 -05:00
comfyanonymous
1804397952 Use fp16 if checkpoint weights are fp16 and the model supports it. 2025-02-27 16:39:57 -05:00
BiologicalExplosion
89253e9fe5
Support Cambricon MLU (#6964)
Co-authored-by: huzhan <huzhan@cambricon.com>
2025-02-26 20:45:13 -05:00
doctorpangloss
048746f58b Update to 0.3.15 and improve models
- Cosmos now fully tested
- Preliminary support for essential Cosmos prompt "upsampler"
- Lumina tests
- Tweaks to language and image resizing nodes
- Fix for #31: all the samplers are now present again
2025-02-24 21:27:15 -08:00
doctorpangloss
693038738a Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-02-24 09:39:26 -08:00
comfyanonymous
96d891cb94 Speedup on some models by not upcasting bfloat16 to float32 on mac. 2025-02-24 05:41:32 -05:00
comfyanonymous
ace899e71a Prioritize fp16 compute when using allow_fp16_accumulation 2025-02-23 04:45:54 -05:00
comfyanonymous
072db3bea6 Assume the mac black image bug won't be fixed before v16. 2025-02-21 20:24:07 -05:00
comfyanonymous
a6deca6d9a Latest mac still has the black image bug. 2025-02-21 20:14:30 -05:00
comfyanonymous
41c30e92e7 Let all model memory be offloaded on nvidia. 2025-02-21 06:32:21 -05:00
comfyanonymous
12da6ef581 Apparently directml supports fp16. 2025-02-20 09:30:24 -05:00
comfyanonymous
b07258cef2 Fix typo.
Let me know if this slows things down on 2000 series and below.
2025-02-18 07:28:33 -05:00
comfyanonymous
31e54b7052 Improve AMD arch detection. 2025-02-17 04:53:40 -05:00
comfyanonymous
8c0bae50c3 bf16 manual cast works on old AMD. 2025-02-17 04:42:40 -05:00
comfyanonymous
530412cb9d Refactor torch version checks to be more future proof. 2025-02-17 04:36:45 -05:00
comfyanonymous
e2919d38b4 Disable bf16 on AMD GPUs that don't support it. 2025-02-16 05:46:10 -05:00
comfyanonymous
1cd6cd6080 Disable pytorch attention in VAE for AMD. 2025-02-14 05:42:14 -05:00
comfyanonymous
d7b4bf21a2 Auto enable mem efficient attention on gfx1100 on PyTorch nightly 2.7
I'm not sure which arches are supported yet. If you see improvements in
memory usage while using --use-pytorch-cross-attention on your AMD GPU let
me know and I will add it to the list.
2025-02-14 04:18:14 -05:00
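A rough sketch of the arch-gated toggle the commit above describes. The `gcnArchName` device property and the gfx1100-only list are taken from the commit message and common ROCm builds of PyTorch, not verified against ComfyUI's code; `enable_mem_efficient_sdp` is the PyTorch switch behind memory-efficient SDPA.

```python
# Sketch: enable PyTorch's memory-efficient SDPA only on known-good AMD arches.
import torch

MEM_EFFICIENT_ARCHES = ["gfx1100"]  # list the commit says may grow over time

def maybe_enable_mem_efficient_attention():
    if not torch.cuda.is_available():
        return
    props = torch.cuda.get_device_properties(0)
    arch = getattr(props, "gcnArchName", "")  # only present on ROCm builds
    if any(a in arch for a in MEM_EFFICIENT_ARCHES):
        # same backend --use-pytorch-cross-attention relies on
        torch.backends.cuda.enable_mem_efficient_sdp(True)
```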
comfyanonymous
8773ccf74d Better memory estimation for ROCm cards that support mem efficient attention.
There is no way to check if the card actually supports it, so it assumes
that it does if you use --use-pytorch-cross-attention.
2025-02-13 08:32:36 -05:00
comfyanonymous
1d5d6586f3 Fix ruff. 2025-02-12 06:49:16 -05:00
zhoufan2956
35740259de
mix_ascend_bf16_infer_err (#6794) 2025-02-12 06:48:11 -05:00
HishamC
b124256817
Fix for running via DirectML (#6542)
* Fix for running via DirectML

Fix DirectML empty image generation issue with Flux1. Add CPU fallback for unsupported paths. Verified the model works on AMD GPUs

* fix formatting

* update causal mask calculation
2025-02-11 17:11:32 -05:00
comfyanonymous
af4b7c91be Make --force-fp16 actually force the diffusion model to be fp16. 2025-02-11 08:33:09 -05:00
catboxanon
43a74c0de1
Allow FP16 accumulation with --fast (#6453)
Currently only applies to PyTorch nightly releases (>=20250208).
2025-02-08 17:00:56 -05:00
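As a hedged sketch of what opting into FP16 accumulation amounts to: the `allow_fp16_accumulation` switch only exists on new enough PyTorch builds (the nightlies mentioned above), so it should be probed for before being set. The wrapper function name here is made up for illustration.

```python
# Sketch: turn on fp16 accumulation for matmuls when the running PyTorch supports it.
import torch

def enable_fp16_accumulation() -> bool:
    matmul = torch.backends.cuda.matmul
    if hasattr(matmul, "allow_fp16_accumulation"):
        matmul.allow_fp16_accumulation = True  # faster fp16 matmuls, lower-precision accumulation
        return True
    return False  # older PyTorch: the option does not exist, nothing to enable
```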
doctorpangloss
6ab1aa1e8a Improving MLLM/VLLM support and fixing bugs
- fix #29: str(model) no longer raises exceptions (as it did with
  HyVideoModelLoader)
- don't try to format CUDA tensors because that can sometimes raise
  exceptions
- cudaAllocAsync has been disabled for now due to 2.6.0 bugs
- improve florence2 support
- add support for paligemma 2. This requires the fix for transformers
  that is currently staged in another repo; install with
  `uv pip install --no-deps "transformers@git+https://github.com/zucchini-nlp/transformers.git#branch=paligemma-fix-kwargs"`
- triton has been updated
- fix missing __init__.py files
2025-02-05 14:02:28 -08:00
doctorpangloss
dcac115f68 Revert "Update logging when models are loaded"
This reverts commit 0d15a091c2.
2025-02-04 15:18:00 -08:00
Rafał Leszko
0d15a091c2
Update logging when models are loaded
The "Loaded " log was logged even if no model were actually loaded into VRAM
2025-02-04 14:44:12 +01:00
doctorpangloss
a3452f6e6a Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-01-28 13:45:51 -08:00
comfyanonymous
255edf2246 Lower minimum ratio of loaded weights on Nvidia. 2025-01-27 05:26:51 -05:00
comfyanonymous
67feb05299 Remove redundant code. 2025-01-25 19:04:53 -05:00
doctorpangloss
631d9e44c6 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-01-16 09:58:02 -08:00
doctorpangloss
12082d877d Fix linting issues 2025-01-04 14:07:19 -08:00
comfyanonymous
d45ebb63f6 Remove old unused function. 2025-01-04 07:20:54 -05:00
comfyanonymous
9e9c8a1c64 Clear cache as often on AMD as on Nvidia.
I think the issue this was working around has been solved.

If you notice that this change slows things down or causes stutters on
your AMD GPU with ROCm on Linux, please report it.
2025-01-02 08:44:16 -05:00
doctorpangloss
9d5a5dd533 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-12-28 14:24:27 -08:00