patientx
0c4bebf5fb
Merge branch 'comfyanonymous:master' into master
2025-03-01 14:59:20 +03:00
Chenlei Hu
4d55f16ae8
Use enum list for --fast options (#7024)
2025-03-01 02:37:35 -05:00
patientx
af43425ab5
Update model_management.py
2025-02-28 16:37:55 +03:00
patientx
1871a594ba
Merge branch 'comfyanonymous:master' into master
2025-02-28 11:47:19 +03:00
comfyanonymous
cf0b549d48
--fast now takes a number as an argument to indicate how fast you want it.
...
The idea is that you can indicate how much quality vs speed you want.
At the moment:
--fast 2 enables fp16 accumulation if your pytorch supports it.
--fast 5 enables fp8 matrix mult on fp8 models and the optimization above.
--fast without a number enables all optimizations.
2025-02-28 02:48:20 -05:00
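A minimal sketch of how a numeric --fast level like this could be parsed and used to gate the two optimizations named above; the argparse wiring and variable names are illustrative, not ComfyUI's actual implementation (which the newer commit above reworks into an enum list):

```python
import argparse

parser = argparse.ArgumentParser()
# With nargs="?", a bare --fast stores const (all optimizations),
# --fast 2 stores 2, and omitting the flag stores 0.
parser.add_argument("--fast", nargs="?", type=int, const=10, default=0,
                    help="Trade quality for speed; higher enables more optimizations.")
args = parser.parse_args()

enable_fp16_accumulation = args.fast >= 2  # --fast 2 and above
enable_fp8_matrix_mult = args.fast >= 5    # --fast 5 and above, fp8 models only
```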
comfyanonymous
eb4543474b
Use fp16 as the intermediate dtype for fp8 weights with --fast if supported.
2025-02-28 02:17:50 -05:00
comfyanonymous
1804397952
Use fp16 if checkpoint weights are fp16 and the model supports it.
2025-02-27 16:39:57 -05:00
patientx
c4fb9f2a63
Merge branch 'comfyanonymous:master' into master
2025-02-27 13:06:17 +03:00
BiologicalExplosion
89253e9fe5
Support Cambricon MLU (#6964)
...
Co-authored-by: huzhan <huzhan@cambricon.com>
2025-02-26 20:45:13 -05:00
patientx
d705fe2e0b
Merge branch 'comfyanonymous:master' into master
2025-02-24 13:42:27 +03:00
comfyanonymous
96d891cb94
Speedup on some models by not upcasting bfloat16 to float32 on mac.
2025-02-24 05:41:32 -05:00
patientx
8142770e5f
Merge branch 'comfyanonymous:master' into master
2025-02-23 14:51:43 +03:00
comfyanonymous
ace899e71a
Prioritize fp16 compute when using allow_fp16_accumulation
2025-02-23 04:45:54 -05:00
patientx
26eb98b96f
Merge branch 'comfyanonymous:master' into master
2025-02-22 14:42:22 +03:00
comfyanonymous
072db3bea6
Assume the mac black image bug won't be fixed before v16.
2025-02-21 20:24:07 -05:00
comfyanonymous
a6deca6d9a
Latest mac still has the black image bug.
2025-02-21 20:14:30 -05:00
patientx
059397437b
Merge branch 'comfyanonymous:master' into master
2025-02-21 23:25:43 +03:00
comfyanonymous
41c30e92e7
Let all model memory be offloaded on nvidia.
2025-02-21 06:32:21 -05:00
patientx
603cacb14a
Merge branch 'comfyanonymous:master' into master
2025-02-20 23:06:56 +03:00
comfyanonymous
12da6ef581
Apparently directml supports fp16.
2025-02-20 09:30:24 -05:00
patientx
3bde94efbb
Merge branch 'comfyanonymous:master' into master
2025-02-18 16:27:10 +03:00
comfyanonymous
b07258cef2
Fix typo.
...
Let me know if this slows things down on 2000 series and below.
2025-02-18 07:28:33 -05:00
patientx
ef2e97356e
Merge branch 'comfyanonymous:master' into master
2025-02-17 14:04:15 +03:00
comfyanonymous
31e54b7052
Improve AMD arch detection.
2025-02-17 04:53:40 -05:00
comfyanonymous
8c0bae50c3
bf16 manual cast works on old AMD.
2025-02-17 04:42:40 -05:00
patientx
e8bf0ca27f
Merge branch 'comfyanonymous:master' into master
2025-02-17 12:42:02 +03:00
comfyanonymous
530412cb9d
Refactor torch version checks to be more future proof.
2025-02-17 04:36:45 -05:00
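A hedged sketch of what a future-proof version check might look like, parsing with packaging instead of slicing strings and guarding against odd build strings (the guard also reflects the older try/except commit further down); the function name and details are assumptions, not the repository's exact code:

```python
import torch
from packaging import version  # pip install packaging

def torch_version_at_least(minimum: str) -> bool:
    # Handles local build suffixes like "2.7.0a0+git1234" or "2.3.0+rocm6.0"
    # and falls back to False rather than crashing on anything weirder.
    try:
        return version.parse(torch.__version__) >= version.parse(minimum)
    except Exception:
        return False
```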
patientx
45c55f6cd0
Merge branch 'comfyanonymous:master' into master
2025-02-16 14:37:41 +03:00
comfyanonymous
e2919d38b4
Disable bf16 on AMD GPUs that don't support it.
2025-02-16 05:46:10 -05:00
patientx
6f05ace055
Merge branch 'comfyanonymous:master' into master
2025-02-14 13:50:23 +03:00
comfyanonymous
1cd6cd6080
Disable pytorch attention in VAE for AMD.
2025-02-14 05:42:14 -05:00
patientx
f125a37bdf
Update model_management.py
2025-02-14 12:33:27 +03:00
patientx
99d2824d5a
Update model_management.py
2025-02-14 12:30:19 +03:00
comfyanonymous
d7b4bf21a2
Auto enable mem efficient attention on gfx1100 on pytorch nightly 2.7
...
I'm not sure which arches are supported yet. If you see improvements in
memory usage while using --use-pytorch-cross-attention on your AMD GPU, let
me know and I will add it to the list.
2025-02-14 04:18:14 -05:00
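A sketch of the kind of arch detection this implies: ROCm builds of PyTorch expose the GCN arch name through device properties, so an allow-list of arches can gate the attention path. The list below is illustrative; the commit explicitly says the full set of supported arches was still unknown:

```python
import torch

MEM_EFFICIENT_ARCHES = ("gfx1100",)  # illustrative; more arches may qualify

def amd_supports_mem_efficient_attention(device: int = 0) -> bool:
    if not torch.cuda.is_available():
        return False
    props = torch.cuda.get_device_properties(device)
    # gcnArchName only exists on ROCm builds, e.g. "gfx1100:sramecc+:xnack-"
    arch = getattr(props, "gcnArchName", "")
    return arch.startswith(MEM_EFFICIENT_ARCHES)
```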
patientx
bce4176d3d
fixes to use pytorch-attention
2025-02-13 19:17:35 +03:00
patientx
f9ee02080f
Merge branch 'comfyanonymous:master' into master
2025-02-13 16:37:24 +03:00
comfyanonymous
8773ccf74d
Better memory estimation for ROCm devices that support mem efficient attention.
...
There is no way to check whether the card actually supports it, so it is
assumed to be supported if you use --use-pytorch-cross-attention.
2025-02-13 08:32:36 -05:00
patientx
e4448cf48e
Merge branch 'comfyanonymous:master' into master
2025-02-12 15:50:23 +03:00
comfyanonymous
1d5d6586f3
Fix ruff.
2025-02-12 06:49:16 -05:00
zhoufan2956
35740259de
mix_ascend_bf16_infer_err (#6794)
2025-02-12 06:48:11 -05:00
HishamC
b124256817
Fix for running via DirectML (#6542)
...
* Fix for running via DirectML
Fix DirectML empty image generation issue with Flux1. Add a CPU fallback for unsupported paths. Verified the model works on AMD GPUs.
* fix formatting
* update causal mask calculation
2025-02-11 17:11:32 -05:00
patientx
196fc385e1
Merge branch 'comfyanonymous:master' into master
2025-02-11 16:38:17 +03:00
comfyanonymous
af4b7c91be
Make --force-fp16 actually force the diffusion model to be fp16.
2025-02-11 08:33:09 -05:00
patientx
9561b03cc7
Merge branch 'comfyanonymous:master' into master
2025-02-09 15:33:03 +03:00
catboxanon
43a74c0de1
Allow FP16 accumulation with --fast (#6453)
...
Currently only applies to PyTorch nightly releases. (>=20250208)
2025-02-08 17:00:56 -05:00
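Under the hood this amounts to toggling a single matmul backend flag; a sketch, guarded with hasattr so it is a no-op on PyTorch builds older than the nightly the PR requires:

```python
import torch

# allow_fp16_accumulation only exists on sufficiently new PyTorch builds
# (the PR notes nightly >= 20250208), hence the hasattr guard.
if hasattr(torch.backends.cuda.matmul, "allow_fp16_accumulation"):
    torch.backends.cuda.matmul.allow_fp16_accumulation = True
```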
patientx
f77fea7fc6
Merge branch 'comfyanonymous:master' into master
2025-01-27 23:45:19 +03:00
comfyanonymous
255edf2246
Lower minimum ratio of loaded weights on Nvidia.
2025-01-27 05:26:51 -05:00
patientx
1442e34d9e
Merge branch 'comfyanonymous:master' into master
2025-01-26 15:27:37 +03:00
comfyanonymous
67feb05299
Remove redundant code.
2025-01-25 19:04:53 -05:00
patientx
c4861c74d4
Updated ZLUDA patching method
2025-01-14 19:57:22 +03:00
patientx
a5a09e45dd
Merge branch 'comfyanonymous:master' into master
2025-01-04 15:42:12 +03:00
comfyanonymous
d45ebb63f6
Remove old unused function.
2025-01-04 07:20:54 -05:00
patientx
c523c36aef
Merge branch 'comfyanonymous:master' into master
2025-01-02 19:55:49 +03:00
comfyanonymous
9e9c8a1c64
Clear cache as often on AMD as Nvidia.
...
I think the issue this was working around has been solved.
If you notice that this change slows things down or causes stutters on
your AMD GPU with ROCm on Linux please report it.
2025-01-02 08:44:16 -05:00
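Since ROCm is exposed through the torch.cuda API, clearing the cache on the same schedule for both vendors reduces to one call path; a minimal sketch (the helper name is illustrative):

```python
import torch

def soft_empty_cache():
    # ROCm devices also report through torch.cuda, so this single path
    # clears the allocator cache as often on AMD as on Nvidia.
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
        torch.cuda.ipc_collect()
```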
patientx
4590f75633
Merge branch 'comfyanonymous:master' into master
2024-12-27 09:59:51 +03:00
comfyanonymous
160ca08138
Use python 3.9 in launch test instead of 3.8
...
Fix ruff check.
2024-12-26 20:05:54 -05:00
Huazhong Ji
c4bfdba330
Support Ascend NPU (#5436)
...
* support ascend npu
Co-authored-by: YukMingLaw <lymmm2@163.com>
Co-authored-by: starmountain1997 <guozr1997@hotmail.com>
Co-authored-by: Ginray <ginray0215@gmail.com>
2024-12-26 19:36:50 -05:00
patientx
49fa16cc7a
Merge branch 'comfyanonymous:master' into master
2024-12-25 14:05:18 +03:00
comfyanonymous
19a64d6291
Cleanup some mac related code.
2024-12-25 05:32:51 -05:00
comfyanonymous
b486885e08
Disable bfloat16 on older mac.
2024-12-25 05:18:50 -05:00
comfyanonymous
0229228f3f
Clean up the VAE dtypes code.
2024-12-25 04:50:34 -05:00
patientx
00afa8b34f
Merge branch 'comfyanonymous:master' into master
2024-12-23 11:36:49 +03:00
comfyanonymous
15564688ed
Add a try/except block so that a weird torch version string won't crash.
2024-12-23 03:22:48 -05:00
Simon Lui
c6b9c11ef6
Add oneAPI device selector for xpu and some other changes. (#6112)
...
* Add oneAPI device selector and some other minor changes.
* Fix device selector variable name.
* Flip minor version check sign.
* Undo changes to README.md.
2024-12-23 03:18:32 -05:00
patientx
403a081215
Merge branch 'comfyanonymous:master' into master
2024-12-23 10:33:06 +03:00
comfyanonymous
e44d0ac7f7
Make --novram completely offload weights.
...
This flag is mainly used for testing weight offloading; it shouldn't
actually be used in practice.
Remove useless import.
2024-12-23 01:51:08 -05:00
patientx
e9d8cad2f0
Merge branch 'comfyanonymous:master' into master
2024-12-22 16:56:29 +03:00
comfyanonymous
57f330caf9
Relax minimum ratio of weights loaded in memory on nvidia.
...
This should make it possible to do higher res images/longer videos by
further offloading weights to CPU memory.
Please report an issue if this slows things down on your system.
2024-12-22 03:06:37 -05:00
patientx
37fc9a3ff2
Merge branch 'comfyanonymous:master' into master
2024-12-21 00:37:40 +03:00
Chenlei Hu
d7969cb070
Replace print with logging (#6138)
...
* Replace print with logging
* nit
2024-12-20 16:24:55 -05:00
patientx
ebf13dfe56
Merge branch 'comfyanonymous:master' into master
2024-12-20 10:05:56 +03:00
comfyanonymous
2dda7c11a3
More proper fix for the memory issue.
2024-12-19 16:21:56 -05:00
comfyanonymous
3ad3248ad7
Fix lowvram bug when using a model multiple times in a row.
...
The memory system would load an extra 64MB each time until either the
model was completely in memory or it hit an OOM.
2024-12-19 16:04:56 -05:00
patientx
c062723ca5
Merge branch 'comfyanonymous:master' into master
2024-12-18 10:16:43 +03:00
comfyanonymous
37e5390f5f
Add: --use-sage-attention to enable SageAttention.
...
You need to have the library installed first.
2024-12-18 01:56:10 -05:00
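A hedged sketch of calling the library directly, assuming SageAttention's sageattn entry point and its (batch, heads, seq, head_dim) layout; ComfyUI's real integration routes this through its own attention dispatch:

```python
import torch
from sageattention import sageattn  # pip install sageattention

q = torch.randn(1, 8, 1024, 64, dtype=torch.float16, device="cuda")
k = torch.randn(1, 8, 1024, 64, dtype=torch.float16, device="cuda")
v = torch.randn(1, 8, 1024, 64, dtype=torch.float16, device="cuda")

# tensor_layout="HND" means (batch, heads, seq_len, head_dim).
out = sageattn(q, k, v, tensor_layout="HND", is_causal=False)
```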
patientx
3218ed8559
Merge branch 'comfyanonymous:master' into master
2024-12-13 11:21:47 +03:00
Chenlei Hu
d9d7f3c619
Lint all unused variables (#5989)
...
* Enable F841
* Autofix
* Remove all unused variable assignment
2024-12-12 17:59:16 -05:00
patientx
5d059779d3
Merge branch 'comfyanonymous:master' into master
2024-12-12 15:26:42 +03:00
comfyanonymous
fd5dfb812c
Set initial load devices for te and model to mps device on mac.
2024-12-12 06:00:31 -05:00
patientx
b826d3e8c2
Merge branch 'comfyanonymous:master' into master
2024-12-03 14:51:59 +03:00
comfyanonymous
57e8bf6a9f
Fix case where a memory leak could cause crash.
...
Now, if code messes up and keeps references to a model object when it
should not, the only symptom will be endless prints in the log instead of
the next workflow crashing ComfyUI.
2024-12-02 19:49:49 -05:00
patientx
1d7cbcdcb2
Update model_management.py
2024-12-03 01:35:06 +03:00
patientx
9dea868c65
Reapply "Merge branch 'comfyanonymous:master' into master"
...
This reverts commit f3968d1611.
2024-12-03 00:45:31 +03:00
patientx
f3968d1611
Revert "Merge branch 'comfyanonymous:master' into master"
...
This reverts commit 605425bdd6, reversing
changes made to 74e6ad95f7.
2024-12-03 00:10:22 +03:00
patientx
605425bdd6
Merge branch 'comfyanonymous:master' into master
2024-12-02 23:10:52 +03:00
comfyanonymous
79d5ceae6e
Improved memory management. (#5450)
...
* Less fragile memory management.
* Fix issue.
* Remove useless function.
* Prevent and detect some types of memory leaks.
* Run garbage collector when switching workflow if needed.
* Fix issue.
2024-12-02 14:39:34 -05:00
patientx
fc8f411fa8
Merge branch 'comfyanonymous:master' into master
2024-11-25 13:31:28 +03:00
comfyanonymous
61196d8857
Add an option to run diffusion model inference in fp32 and fp64.
2024-11-25 05:00:23 -05:00
patientx
4526136e42
Merge branch 'comfyanonymous:master' into master
2024-11-01 13:16:45 +03:00
comfyanonymous
1af4a47fd1
Bump up mac version for attention upcast bug workaround.
2024-10-31 15:15:31 -04:00
patientx
5a425aeda1
Merge branch 'comfyanonymous:master' into master
2024-10-20 21:06:24 +03:00
comfyanonymous
471cd3eace
fp8 casting is fast on GPUs that support fp8 compute.
2024-10-20 00:54:47 -04:00
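A sketch of how that support might be detected on Nvidia, assuming fp8 tensor-core compute starts at SM 8.9 (Ada) and 9.0 (Hopper); the helper name is illustrative:

```python
import torch

def supports_fp8_compute(device=None) -> bool:
    # fp8 tensor-core compute arrived with Ada (SM 8.9) and Hopper (SM 9.0).
    if not torch.cuda.is_available():
        return False
    major, minor = torch.cuda.get_device_capability(device)
    return (major, minor) >= (8, 9)
```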
patientx
d4b509799f
Merge branch 'comfyanonymous:master' into master
2024-10-18 11:14:20 +03:00
comfyanonymous
67158994a4
Use the lowvram cast_to function for everything.
2024-10-17 17:25:56 -04:00
patientx
f9eab05f54
Merge branch 'comfyanonymous:master' into master
2024-10-10 10:30:17 +03:00
Jonathan Avila
4b2f0d9413
Increase maximum macOS version to 15.0.1 when forcing upcast attention (#5191)
2024-10-09 22:21:41 -04:00
comfyanonymous
e38c94228b
Add a weight_dtype fp8_e4m3fn_fast to the Diffusion Model Loader node.
...
This is used to load weights in fp8 and use fp8 matrix multiplication.
2024-10-09 19:43:17 -04:00
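At the tensor level, loading weights in fp8 is just a dtype cast; a minimal sketch (the node itself wires this through ComfyUI's loader machinery):

```python
import torch

w = torch.randn(4096, 4096, dtype=torch.float16)
w_fp8 = w.to(torch.float8_e4m3fn)  # roughly half the memory of fp16
```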
patientx
ae611c9b61
Merge branch 'comfyanonymous:master' into master
2024-09-21 13:51:37 +03:00
comfyanonymous
dc96a1ae19
Load controlnet in fp8 if weights are in fp8.
2024-09-21 04:50:12 -04:00
patientx
9e8686df8d
Merge branch 'comfyanonymous:master' into master
2024-09-19 19:57:21 +03:00
Simon Lui
de8e8e3b0d
Stop the xpu Pytorch nightly build from calling optimize, which doesn't exist. (#4978)
2024-09-19 05:11:42 -04:00
patientx
6fdbaf1a76
Merge branch 'comfyanonymous:master' into master
2024-09-05 12:04:05 +03:00
comfyanonymous
c7427375ee
Prioritize freeing partially offloaded models first.
2024-09-04 19:47:32 -04:00
patientx
894c727ce2
Update model_management.py
2024-09-05 00:05:54 +03:00
patientx
93fa5c9ebb
Merge branch 'comfyanonymous:master' into master
2024-09-02 10:03:48 +03:00
comfyanonymous
8d31a6632f
Speed up inference on nvidia 10 series on Linux.
2024-09-01 17:29:31 -04:00
patientx
f02c0d3ed9
Merge branch 'comfyanonymous:master' into master
2024-09-01 14:34:56 +03:00
comfyanonymous
b643eae08b
Make minimum_inference_memory() depend on --reserve-vram
2024-09-01 01:18:34 -04:00
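A sketch of the relationship this describes, with the baseline and plumbing as assumptions: the inference-memory floor is no longer a constant but grows with whatever the user reserved via --reserve-vram:

```python
GIB = 1024 ** 3
extra_reserved_vram = int(0.4 * GIB)  # would be set from --reserve-vram (bytes)

def minimum_inference_memory() -> int:
    # Baseline floor plus the user-requested reserve, in bytes.
    return GIB + extra_reserved_vram
```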
patientx
acc3d6a2ea
Update model_management.py
2024-08-30 20:13:28 +03:00
patientx
51af2440ef
Update model_management.py
2024-08-30 20:10:47 +03:00
patientx
3e226f02f3
Update model_management.py
2024-08-30 20:08:18 +03:00
comfyanonymous
935ae153e1
Cleanup.
2024-08-30 12:53:59 -04:00
patientx
7056a6aa6f
Merge branch 'comfyanonymous:master' into master
2024-08-28 09:36:30 +03:00
comfyanonymous
38c22e631a
Fix case where model was not properly unloaded in merging workflows.
2024-08-27 19:03:51 -04:00
patientx
134569ea48
Update model_management.py
2024-08-23 14:10:09 +03:00
patientx
c98e8a0a55
Merge branch 'comfyanonymous:master' into master
2024-08-23 12:31:51 +03:00
comfyanonymous
5d8bbb7281
Cleanup.
2024-08-23 04:06:27 -04:00
patientx
9f87d61bfe
Merge branch 'comfyanonymous:master' into master
2024-08-23 11:04:56 +03:00
comfyanonymous
2c1d2375d6
Fix.
2024-08-23 04:04:55 -04:00
Simon Lui
64ccb3c7e3
Rework IPEX check for future inclusion of XPU into Pytorch upstream and do a bit more optimization of ipex.optimize(). (#4562)
2024-08-23 03:59:57 -04:00
patientx
1ef90b7ac8
Merge branch 'comfyanonymous:master' into master
2024-08-23 00:55:19 +03:00
comfyanonymous
7c6bb84016
Code cleanups.
2024-08-22 17:05:12 -04:00
patientx
dec75f11e4
Merge branch 'comfyanonymous:master' into master
2024-08-22 23:36:58 +03:00
comfyanonymous
c54d3ed5e6
Fix issue with models staying loaded in memory.
2024-08-22 15:58:20 -04:00
David
7b70b266d8
Generalize macOS version check for force-upcast-attention (#4548)
...
This code automatically forces upcasting attention for macOS versions 14.5 and 14.6. My computer returns the string "14.6.1" for `platform.mac_ver()[0]`, so this generalizes the comparison to catch more versions.
I am running macOS Sonoma 14.6.1 (the latest version) and was seeing black image generation on previously functional workflows after recent software updates. This PR solved the issue for me.
See comfyanonymous/ComfyUI#3521
2024-08-22 13:24:21 -04:00
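A sketch of the generalized comparison the PR describes: parse the full dotted version from platform.mac_ver() into a tuple so patch releases like 14.6.1 compare correctly, rather than matching exact strings. The bounds below mirror the surrounding commits (bug present from 14.5, assumed unfixed before v16):

```python
import platform

def needs_attention_upcast() -> bool:
    mac_ver = platform.mac_ver()[0]  # e.g. "14.6.1"; "" on non-macOS
    if not mac_ver:
        return False
    parts = tuple(int(p) for p in mac_ver.split("."))
    # Tuple comparison catches patch releases: (14, 6, 1) falls in range.
    return (14, 5) <= parts < (16,)
```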
patientx
0cd8a740bb
Merge branch 'comfyanonymous:master' into master
2024-08-22 14:01:42 +03:00
comfyanonymous
843a7ff70c
fp16 is actually faster than fp32 on a GTX 1080.
2024-08-21 23:23:50 -04:00
patientx
febf8601dc
Merge branch 'comfyanonymous:master' into master
2024-08-22 00:07:14 +03:00
comfyanonymous
a60620dcea
Fix slow performance on 10 series Nvidia GPUs.
2024-08-21 16:39:02 -04:00
patientx
ac75d4e4e0
Merge branch 'comfyanonymous:master' into master
2024-08-21 09:49:29 +03:00
comfyanonymous
03ec517afb
Remove useless line, adjust windows default reserved vram.
2024-08-21 00:47:19 -04:00
patientx
5656b5b956
Merge branch 'comfyanonymous:master' into master
2024-08-20 23:07:54 +03:00
comfyanonymous
9953f22fce
Add --fast argument to enable experimental optimizations.
...
Optimizations that might break things/lower quality will be put behind
this flag first and might be enabled by default in the future.
Currently the only optimization is float8_e4m3fn matrix multiplication on
4000/Ada series Nvidia cards or later. If you have one of these cards, you
will see a speed boost when using fp8_e4m3fn flux, for example.
2024-08-20 11:55:51 -04:00
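A hedged sketch of the float8_e4m3fn matrix multiplication itself, using PyTorch's private torch._scaled_mm, whose signature has shifted across releases, so treat this as illustrative: the second operand must be column-major and dimensions must be multiples of 16:

```python
import torch

a = torch.randn(128, 256, device="cuda", dtype=torch.float16).to(torch.float8_e4m3fn)
b = torch.randn(512, 256, device="cuda", dtype=torch.float16).to(torch.float8_e4m3fn)
scale = torch.tensor(1.0, device="cuda")

# b.t() gives the required column-major layout; recent PyTorch returns the
# output tensor directly (older versions returned an (out, amax) pair).
out = torch._scaled_mm(a, b.t(), scale_a=scale, scale_b=scale,
                       out_dtype=torch.float16)
```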
comfyanonymous
1b3eee672c
Fix potential issue with multi devices.
2024-08-20 10:46:36 -04:00
patientx
b20f5b1e32
Merge branch 'comfyanonymous:master' into master
2024-08-20 00:31:41 +03:00
comfyanonymous
045377ea89
Add a --reserve-vram argument if you don't want comfy to use all of it.
...
--reserve-vram 1.0, for example, will make ComfyUI try to keep 1GB of VRAM free.
This can also be useful if workflows are failing because of OOM errors, but
in that case please report whether --reserve-vram improves your situation.
2024-08-19 17:16:18 -04:00
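A minimal sketch of the flag as described, converting gigabytes to bytes; the parsing details here are illustrative rather than ComfyUI's exact code:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--reserve-vram", type=float, default=None,
                    help="Amount of VRAM in GB to try to keep free, e.g. 1.0")
args = parser.parse_args()

reserved_bytes = int((args.reserve_vram or 0.0) * 1024 ** 3)
```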
comfyanonymous
be0726c1ed
Remove duplication.
2024-08-19 15:26:50 -04:00
patientx
7bd86b0896
Update model_management.py
2024-08-16 00:53:38 +03:00
comfyanonymous
39fb74c5bd
Fix bug when model cannot be partially unloaded.
2024-08-13 03:57:55 -04:00
comfyanonymous
74e124f4d7
Fix some issues with TE being in lowvram mode.
2024-08-12 23:42:21 -04:00
comfyanonymous
b8ffb2937f
Memory tweaks.
2024-08-12 15:07:11 -04:00
comfyanonymous
ad76574cb8
Fix some potential issues with the previous commits.
2024-08-12 00:23:29 -04:00
comfyanonymous
5c69cde037
Load TE model straight to vram if certain conditions are met.
2024-08-11 23:52:43 -04:00
comfyanonymous
1de69fe4d5
Fix some issues with inference slowing down.
2024-08-10 16:21:25 -04:00
comfyanonymous
55ad9d5f8c
Fix regression.
2024-08-09 03:36:40 -04:00
comfyanonymous
037c38eb0f
Try to improve inference speed on some machines.
2024-08-08 17:29:27 -04:00
comfyanonymous
66d4233210
Fix.
2024-08-08 15:16:51 -04:00
comfyanonymous
08f92d55e9
Partial model shift support.
2024-08-08 14:45:06 -04:00
comfyanonymous
6969fc9ba4
Make supported_dtypes a priority list.
2024-08-07 15:00:06 -04:00
comfyanonymous
b334605a66
Fix OOMs happening in some cases.
...
A cloned model patcher sometimes reported a model was loaded on a device
when it wasn't.
2024-08-06 13:36:04 -04:00