Commit Graph

273 Commits

patientx
4590f75633 Merge branch 'comfyanonymous:master' into master
2024-12-27 09:59:51 +03:00

comfyanonymous
160ca08138 Use python 3.9 in launch test instead of 3.8
Fix ruff check.
2024-12-26 20:05:54 -05:00

Huazhong Ji
c4bfdba330 Support ascend npu (#5436)
* support ascend npu
Co-authored-by: YukMingLaw <lymmm2@163.com>
Co-authored-by: starmountain1997 <guozr1997@hotmail.com>
Co-authored-by: Ginray <ginray0215@gmail.com>
2024-12-26 19:36:50 -05:00

patientx
49fa16cc7a Merge branch 'comfyanonymous:master' into master
2024-12-25 14:05:18 +03:00

comfyanonymous
19a64d6291 Cleanup some mac related code.
2024-12-25 05:32:51 -05:00

comfyanonymous
b486885e08 Disable bfloat16 on older mac.
2024-12-25 05:18:50 -05:00

comfyanonymous
0229228f3f Clean up the VAE dtypes code.
2024-12-25 04:50:34 -05:00

patientx
00afa8b34f Merge branch 'comfyanonymous:master' into master
2024-12-23 11:36:49 +03:00

comfyanonymous
15564688ed Add a try except block so if torch version is weird it won't crash.
2024-12-23 03:22:48 -05:00
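The commit above wraps a version check in try/except so an unusual torch version string degrades gracefully instead of crashing. A minimal sketch of that pattern (function and fallback value are illustrative, not the actual ComfyUI code):

```python
# Illustrative sketch of defensive version parsing; not the actual repo code.
def minor_version(version_string):
    """Extract the minor version number, falling back to 0 on odd strings."""
    try:
        return int(version_string.split(".")[1])
    except (IndexError, ValueError):
        return 0  # unusual version string: don't crash, just assume old

minor = minor_version("2.5.1+cu124")  # a typical torch version string -> 5
```

Strings like "2.5.1+cu124" parse normally, while anything without a numeric minor component (nightly or vendor builds) takes the fallback path.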
Simon Lui
c6b9c11ef6 Add oneAPI device selector for xpu and some other changes. (#6112)
* Add oneAPI device selector and some other minor changes.
* Fix device selector variable name.
* Flip minor version check sign.
* Undo changes to README.md.
2024-12-23 03:18:32 -05:00

patientx
403a081215 Merge branch 'comfyanonymous:master' into master
2024-12-23 10:33:06 +03:00

comfyanonymous
e44d0ac7f7 Make --novram completely offload weights.
This flag is mainly used for testing the weight offloading, it shouldn't
actually be used in practice.
Remove useless import.
2024-12-23 01:51:08 -05:00

patientx
e9d8cad2f0 Merge branch 'comfyanonymous:master' into master
2024-12-22 16:56:29 +03:00

comfyanonymous
57f330caf9 Relax minimum ratio of weights loaded in memory on nvidia.
This should make it possible to do higher res images/longer videos by
further offloading weights to CPU memory.
Please report an issue if this slows down things on your system.
2024-12-22 03:06:37 -05:00

patientx
37fc9a3ff2 Merge branch 'comfyanonymous:master' into master
2024-12-21 00:37:40 +03:00

Chenlei Hu
d7969cb070 Replace print with logging (#6138)
* Replace print with logging
* nit
* nit
* nit
* nit
* nit
* nit
2024-12-20 16:24:55 -05:00
patientx
ebf13dfe56 Merge branch 'comfyanonymous:master' into master
2024-12-20 10:05:56 +03:00

comfyanonymous
2dda7c11a3 More proper fix for the memory issue.
2024-12-19 16:21:56 -05:00

comfyanonymous
3ad3248ad7 Fix lowvram bug when using a model multiple times in a row.
The memory system would load an extra 64MB each time until either the
model was completely in memory or OOM.
2024-12-19 16:04:56 -05:00

patientx
c062723ca5 Merge branch 'comfyanonymous:master' into master
2024-12-18 10:16:43 +03:00

comfyanonymous
37e5390f5f Add: --use-sage-attention to enable SageAttention.
You need to have the library installed first.
2024-12-18 01:56:10 -05:00
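As the commit notes, --use-sage-attention only works if the SageAttention library is installed. A common way to gate an optional backend like that is an import probe; this is a sketch of the general pattern, not the repo's actual code:

```python
# Sketch of gating an optional dependency; not the actual ComfyUI code.
try:
    import sageattention  # optional: pip install sageattention
    SAGE_ATTENTION_AVAILABLE = True
except ImportError:
    SAGE_ATTENTION_AVAILABLE = False

def attention_backend(use_sage):
    """Pick the requested backend, falling back when the library is missing."""
    if use_sage and SAGE_ATTENTION_AVAILABLE:
        return "sage"
    return "default"
```

The probe runs once at import time, so per-call backend selection stays a cheap boolean check.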
patientx
3218ed8559 Merge branch 'comfyanonymous:master' into master
2024-12-13 11:21:47 +03:00

Chenlei Hu
d9d7f3c619 Lint all unused variables (#5989)
* Enable F841
* Autofix
* Remove all unused variable assignment
2024-12-12 17:59:16 -05:00
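F841 is the Pyflakes/Ruff rule for local variables that are assigned but never used. An illustrative violation and its autofixed form (example code, not from the repo):

```python
# Illustrative F841 violation; not code from this repo.
def total(items):
    count = len(items)  # F841: `count` is assigned but never used
    return sum(items)

# After the autofix, the dead assignment is simply removed:
def total_fixed(items):
    return sum(items)
```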
patientx
5d059779d3 Merge branch 'comfyanonymous:master' into master
2024-12-12 15:26:42 +03:00

comfyanonymous
fd5dfb812c Set initial load devices for te and model to mps device on mac.
2024-12-12 06:00:31 -05:00

patientx
b826d3e8c2 Merge branch 'comfyanonymous:master' into master
2024-12-03 14:51:59 +03:00

comfyanonymous
57e8bf6a9f Fix case where a memory leak could cause crash.
Now the only symptom of code messing up and keeping references to a model
object when it should not will be endless prints in the log instead of the
next workflow crashing ComfyUI.
2024-12-02 19:49:49 -05:00
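The fix above turns stray references to model objects into log noise rather than crashes. Detecting that kind of leak generally combines weak references with a garbage-collector pass, roughly like this (a sketch of the technique, not ComfyUI's implementation):

```python
import gc
import weakref

class Model:
    """Stand-in for a loaded model object."""

def is_leaked(make_stray_ref):
    """Return True if `make_stray_ref` retains a reference to the model."""
    model = Model()
    probe = weakref.ref(model)      # a weakref does not keep the model alive
    stray = make_stray_ref(model)   # code under test may (wrongly) retain it
    del model
    gc.collect()                    # break any reference cycles
    return probe() is not None      # still reachable => something leaked it

# A caller that stores the model (e.g. in a list) leaks it; one that
# ignores it lets the weakref die, so no leak is reported.
```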
patientx
1d7cbcdcb2 Update model_management.py
2024-12-03 01:35:06 +03:00

patientx
9dea868c65 Reapply "Merge branch 'comfyanonymous:master' into master"
This reverts commit f3968d1611.
2024-12-03 00:45:31 +03:00

patientx
f3968d1611 Revert "Merge branch 'comfyanonymous:master' into master"
This reverts commit 605425bdd6, reversing
changes made to 74e6ad95f7.
2024-12-03 00:10:22 +03:00

patientx
605425bdd6 Merge branch 'comfyanonymous:master' into master
2024-12-02 23:10:52 +03:00

comfyanonymous
79d5ceae6e Improved memory management. (#5450)
* Less fragile memory management.
* Fix issue.
* Remove useless function.
* Prevent and detect some types of memory leaks.
* Run garbage collector when switching workflow if needed.
* Fix issue.
2024-12-02 14:39:34 -05:00

patientx
fc8f411fa8 Merge branch 'comfyanonymous:master' into master
2024-11-25 13:31:28 +03:00

comfyanonymous
61196d8857 Add option to inference the diffusion model in fp32 and fp64.
2024-11-25 05:00:23 -05:00

patientx
4526136e42 Merge branch 'comfyanonymous:master' into master
2024-11-01 13:16:45 +03:00

comfyanonymous
1af4a47fd1 Bump up mac version for attention upcast bug workaround.
2024-10-31 15:15:31 -04:00

patientx
5a425aeda1 Merge branch 'comfyanonymous:master' into master
2024-10-20 21:06:24 +03:00

comfyanonymous
471cd3eace fp8 casting is fast on GPUs that support fp8 compute.
2024-10-20 00:54:47 -04:00

patientx
d4b509799f Merge branch 'comfyanonymous:master' into master
2024-10-18 11:14:20 +03:00

comfyanonymous
67158994a4 Use the lowvram cast_to function for everything.
2024-10-17 17:25:56 -04:00

patientx
f9eab05f54 Merge branch 'comfyanonymous:master' into master
2024-10-10 10:30:17 +03:00

Jonathan Avila
4b2f0d9413 Increase maximum macOS version to 15.0.1 when forcing upcast attention (#5191)
2024-10-09 22:21:41 -04:00

comfyanonymous
e38c94228b Add a weight_dtype fp8_e4m3fn_fast to the Diffusion Model Loader node.
This is used to load weights in fp8 and use fp8 matrix multiplication.
2024-10-09 19:43:17 -04:00

patientx
ae611c9b61 Merge branch 'comfyanonymous:master' into master
2024-09-21 13:51:37 +03:00

comfyanonymous
dc96a1ae19 Load controlnet in fp8 if weights are in fp8.
2024-09-21 04:50:12 -04:00

patientx
9e8686df8d Merge branch 'comfyanonymous:master' into master
2024-09-19 19:57:21 +03:00

Simon Lui
de8e8e3b0d Fix xpu Pytorch nightly build from calling optimize which doesn't exist. (#4978)
2024-09-19 05:11:42 -04:00

patientx
6fdbaf1a76 Merge branch 'comfyanonymous:master' into master
2024-09-05 12:04:05 +03:00

comfyanonymous
c7427375ee Prioritize freeing partially offloaded models first.
2024-09-04 19:47:32 -04:00

patientx
894c727ce2 Update model_management.py
2024-09-05 00:05:54 +03:00