comfyanonymous
160ca08138
Use python 3.9 in launch test instead of 3.8
...
Fix ruff check.
2024-12-26 20:05:54 -05:00
Huazhong Ji
c4bfdba330
Support ascend npu ( #5436 )
...
* support ascend npu
Co-authored-by: YukMingLaw <lymmm2@163.com>
Co-authored-by: starmountain1997 <guozr1997@hotmail.com>
Co-authored-by: Ginray <ginray0215@gmail.com>
2024-12-26 19:36:50 -05:00
doctorpangloss
7655be873c
Updates to support Hunyuan Video
2024-12-25 22:39:12 -08:00
comfyanonymous
19a64d6291
Clean up some Mac-related code.
2024-12-25 05:32:51 -05:00
comfyanonymous
b486885e08
Disable bfloat16 on older mac.
2024-12-25 05:18:50 -05:00
comfyanonymous
0229228f3f
Clean up the VAE dtypes code.
2024-12-25 04:50:34 -05:00
doctorpangloss
0fd407ae87
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-12-24 16:48:03 -08:00
comfyanonymous
15564688ed
Add a try/except block so an unusual torch version string won't cause a crash.
2024-12-23 03:22:48 -05:00
Simon Lui
c6b9c11ef6
Add oneAPI device selector for xpu and some other changes. ( #6112 )
...
* Add oneAPI device selector and some other minor changes.
* Fix device selector variable name.
* Flip minor version check sign.
* Undo changes to README.md.
2024-12-23 03:18:32 -05:00
comfyanonymous
e44d0ac7f7
Make --novram completely offload weights.
...
This flag is mainly used for testing weight offloading; it shouldn't
actually be used in practice (a hedged sketch of the behavior follows this entry).
Remove useless import.
2024-12-23 01:51:08 -05:00
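A minimal sketch of what "completely offload" means here, assuming hypothetical names (`novram`, `model_size`, `free_vram`); the real model-management code is considerably more involved:

```python
# Hedged sketch: with --novram, keep zero bytes of weights resident on the
# GPU; every weight gets copied/cast to the device at the moment it is used.
def weights_to_keep_on_gpu(novram: bool, model_size: int, free_vram: int) -> int:
    if novram:
        return 0  # complete offload, useful only for testing the offload path
    return min(model_size, free_vram)
```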
comfyanonymous
57f330caf9
Relax minimum ratio of weights loaded in memory on nvidia.
...
This should make it possible to do higher-res images and longer videos by
further offloading weights to CPU memory (a sketch of the idea follows this entry).
Please report an issue if this slows things down on your system.
2024-12-22 03:06:37 -05:00
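A hedged sketch of the concept behind a minimum loaded-weight ratio; the threshold values are illustrative, not the commit's actual numbers:

```python
# The loader keeps at least some fraction of the weights in VRAM; relaxing
# that floor on Nvidia lets more of the model spill to CPU memory, freeing
# VRAM for larger latents (higher-res images, longer videos).
def minimum_loaded_bytes(model_size: int, is_nvidia: bool) -> int:
    min_ratio = 0.2 if is_nvidia else 0.4  # illustrative values only
    return int(model_size * min_ratio)
```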
Chenlei Hu
d7969cb070
Replace print with logging ( #6138 )
...
* Replace print with logging
* nit
* nit
* nit
* nit
* nit
* nit
2024-12-20 16:24:55 -05:00
comfyanonymous
2dda7c11a3
More proper fix for the memory issue.
2024-12-19 16:21:56 -05:00
comfyanonymous
3ad3248ad7
Fix lowvram bug when using a model multiple times in a row.
...
The memory system would load an extra 64MB each time until either the
model was completely in memory or an OOM occurred.
2024-12-19 16:04:56 -05:00
comfyanonymous
37e5390f5f
Add --use-sage-attention to enable SageAttention.
...
You need to have the library installed first (a hedged availability check is sketched after this entry).
2024-12-18 01:56:10 -05:00
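Since the flag only works when the library is present, a guard along these lines is the natural shape; the import assumes the `sageattention` PyPI package and its `sageattn` entry point, and may not match the project's actual check:

```python
import torch

# Hedged sketch: honor --use-sage-attention only if the library imports.
try:
    from sageattention import sageattn  # pip install sageattention
    SAGE_ATTENTION_AVAILABLE = True
except ImportError:
    SAGE_ATTENTION_AVAILABLE = False

def attention(q, k, v, use_sage: bool):
    if use_sage and SAGE_ATTENTION_AVAILABLE:
        return sageattn(q, k, v)  # drop-in for SDPA-style attention
    return torch.nn.functional.scaled_dot_product_attention(q, k, v)
```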
Chenlei Hu
d9d7f3c619
Lint all unused variables ( #5989 )
...
* Enable F841
* Autofix
* Remove all unused variable assignment
2024-12-12 17:59:16 -05:00
comfyanonymous
fd5dfb812c
Set initial load devices for the text encoder and model to the mps device on Mac.
2024-12-12 06:00:31 -05:00
doctorpangloss
2d1676c717
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-12-09 15:54:37 -08:00
comfyanonymous
57e8bf6a9f
Fix case where a memory leak could cause crash.
...
Now the only symptom of code wrongly keeping references to a model object
will be endless messages in the log, instead of the next workflow crashing
ComfyUI.
2024-12-02 19:49:49 -05:00
comfyanonymous
79d5ceae6e
Improved memory management. ( #5450 )
...
* Less fragile memory management.
* Fix issue.
* Remove useless function.
* Prevent and detect some types of memory leaks.
* Run garbage collector when switching workflow if needed.
* Fix issue.
2024-12-02 14:39:34 -05:00
comfyanonymous
61196d8857
Add an option to run diffusion model inference in fp32 and fp64.
2024-11-25 05:00:23 -05:00
doctorpangloss
9d20de6462
Merge branch 'improved_memory' of github.com:comfyanonymous/ComfyUI
2024-11-19 11:06:27 -08:00
doctorpangloss
772e768fe8
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-11-04 10:17:26 -08:00
comfyanonymous
95972bab86
Fix issue.
2024-11-04 05:07:07 -05:00
doctorpangloss
31eacb6ac9
Improve compilation of models, adding support for triton
2024-11-01 10:40:58 -07:00
comfyanonymous
bd5d8f150f
Prevent and detect some types of memory leaks.
2024-11-01 06:55:42 -04:00
comfyanonymous
975927cc79
Remove useless function.
2024-11-01 04:40:33 -04:00
comfyanonymous
1735d4fb01
Fix issue.
2024-11-01 04:25:27 -04:00
comfyanonymous
d8bd2a9baa
Less fragile memory management.
2024-11-01 02:41:51 -04:00
comfyanonymous
1af4a47fd1
Bump up mac version for attention upcast bug workaround.
2024-10-31 15:15:31 -04:00
doctorpangloss
76a80a65ea
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-10-29 15:35:39 -07:00
comfyanonymous
471cd3eace
fp8 casting is fast on GPUs that support fp8 compute.
2024-10-20 00:54:47 -04:00
comfyanonymous
67158994a4
Use the lowvram cast_to function for everything.
2024-10-17 17:25:56 -04:00
Benjamin Berman
e5fc19a25b
Improve vanilla node importing and fix a CUDA-on-CPU-devices bug
2024-10-15 00:02:06 -07:00
Benjamin Berman
9c9df424b4
Fix CUDA package with no drivers
2024-10-14 22:56:21 -07:00
doctorpangloss
8512f361fe
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-10-14 15:26:27 -07:00
doctorpangloss
a38968f098
Improvements to execution
...
- Validation errors that occur early in the lifecycle of prompt execution now propagate to their callers in the EmbeddedComfyClient. This includes error messages about missing node classes.
- The execution context now includes the node_id and the prompt_id.
- Latent previews are now sent with a node_id (a hypothetical payload is sketched after this entry). This is not backwards compatible with old frontends.
- Dependency execution errors are now modeled correctly.
- Distributed progress encodes image previews with node and prompt IDs.
- Typing for models.
- The frontend was updated to use node IDs with previews.
- Improvements to torch.compile experiments.
- Some controlnet_aux nodes were upstreamed.
2024-10-10 19:30:18 -07:00
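For illustration only, a preview event tagged with node and prompt IDs might look like the following; every field name here is hypothetical, not the project's actual wire format:

```python
# Hypothetical preview payload: prompt_id and node_id let a frontend route
# each latent preview to the node that produced it within a given prompt.
preview_event = {
    "type": "preview_image",
    "prompt_id": "d4f1...",   # illustrative, truncated id
    "node_id": "17",
    "image": b"<png bytes>",  # placeholder for the encoded preview
}
```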
doctorpangloss
5f26b76f59
Gracefully handle running with CUDA torch on CPU-only devices
2024-10-10 10:42:22 -07:00
Jonathan Avila
4b2f0d9413
Increase maximum macOS version to 15.0.1 when forcing upcast attention ( #5191 )
2024-10-09 22:21:41 -04:00
comfyanonymous
e38c94228b
Add a weight_dtype fp8_e4m3fn_fast to the Diffusion Model Loader node.
...
This is used to load the weights in fp8 and run the matrix multiplications in fp8 as well (a hedged sketch of the distinction follows this entry).
2024-10-09 19:43:17 -04:00
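A hedged sketch of the distinction: plain fp8 storage upcasts for the matmul, while the `_fast` variant also runs the multiplication in fp8 on supported hardware. The call that does the fp8 GEMM (and its scaling arguments) is deliberately elided because it varies by torch version:

```python
import torch

w = torch.randn(4096, 4096)

# fp8_e4m3fn: weights stored in fp8, dequantized for the matmul.
w_fp8 = w.to(torch.float8_e4m3fn)

def matmul_stored_fp8(x: torch.Tensor) -> torch.Tensor:
    return x @ w_fp8.to(x.dtype)  # dequantize, then multiply at x's dtype

# fp8_e4m3fn_fast (sketch): on Ada/Hopper-class GPUs the multiplication
# itself runs in fp8 (e.g. via torch's scaled-mm kernel), trading a bit of
# precision for speed; the exact call and scale handling are omitted here.
```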
doctorpangloss
c34403b574
Fix invalid device here
2024-10-09 11:21:19 -07:00
doctorpangloss
bbe2ed330c
Memory management and compilation improvements
...
- Experimental support for sage attention on Linux
- Diffusers loader now supports model indices
- Transformers model management now aligns with updates to ComfyUI
- Flux layers correctly use unbind
- Add float8 support for model loading in more places
- Experimental quantization approaches from Quanto and torchao
- Model upscaling interacts with memory management better
This update also disables ROCm testing because it isn't reliable enough on consumer hardware; ROCm is not really supported on the 7600.
2024-10-09 09:13:47 -07:00
doctorpangloss
d25394d386
The API now supports fire-and-forget and checking on queue status; prefetch_count is now expressly set to 1 for workers (a hedged broker sketch follows this entry)
2024-09-27 12:07:54 -07:00
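Assuming the workers consume prompts from an AMQP broker (an assumption; pika stands in here for whatever client the project actually uses), the prefetch setting looks like this:

```python
import pika  # assumption: RabbitMQ-style broker consumed via pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
# prefetch_count=1: the broker hands a worker at most one unacknowledged
# message, so long-running prompts don't pile up behind a busy worker.
channel.basic_qos(prefetch_count=1)
```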
doctorpangloss
fa3176f96f
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-09-23 12:50:31 -07:00
comfyanonymous
dc96a1ae19
Load controlnet in fp8 if weights are in fp8.
2024-09-21 04:50:12 -04:00
Simon Lui
de8e8e3b0d
Stop the xpu PyTorch nightly build from calling optimize(), which doesn't exist there. ( #4978 )
2024-09-19 05:11:42 -04:00
doctorpangloss
db423f8013
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-09-05 09:23:00 -07:00
comfyanonymous
c7427375ee
Prioritize freeing partially offloaded models first.
2024-09-04 19:47:32 -04:00
doctorpangloss
38bcd9fcbd
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-09-03 15:28:52 -07:00
comfyanonymous
8d31a6632f
Speed up inference on nvidia 10 series on Linux.
2024-09-01 17:29:31 -04:00
comfyanonymous
b643eae08b
Make minimum_inference_memory() depend on --reserve-vram
2024-09-01 01:18:34 -04:00
comfyanonymous
935ae153e1
Cleanup.
2024-08-30 12:53:59 -04:00
doctorpangloss
fd503d8a96
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-08-29 16:37:30 -07:00
comfyanonymous
38c22e631a
Fix case where model was not properly unloaded in merging workflows.
2024-08-27 19:03:51 -04:00
doctorpangloss
5155a3e248
Merge WIP
2024-08-25 18:52:29 -07:00
comfyanonymous
5d8bbb7281
Cleanup.
2024-08-23 04:06:27 -04:00
comfyanonymous
2c1d2375d6
Fix.
2024-08-23 04:04:55 -04:00
Simon Lui
64ccb3c7e3
Rework IPEX check for future inclusion of XPU into Pytorch upstream and do a bit more optimization of ipex.optimize(). ( #4562 )
2024-08-23 03:59:57 -04:00
comfyanonymous
7c6bb84016
Code cleanups.
2024-08-22 17:05:12 -04:00
comfyanonymous
c54d3ed5e6
Fix issue with models staying loaded in memory.
2024-08-22 15:58:20 -04:00
David
7b70b266d8
Generalize MacOS version check for force-upcast-attention ( #4548 )
...
This code automatically forces attention upcasting for macOS versions 14.5 and 14.6. My computer returns the string "14.6.1" for `platform.mac_ver()[0]`, so this generalizes the comparison to catch more versions (a version-parsing sketch follows this entry).
I am running macOS Sonoma 14.6.1 (the latest version) and was seeing black image generation on previously functional workflows after recent software updates. This PR solved the issue for me.
See comfyanonymous/ComfyUI#3521
2024-08-22 13:24:21 -04:00
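A hedged sketch of the generalized check: parse the version into a tuple so patch releases like "14.6.1" compare correctly instead of string-matching "14.6". The helper is illustrative, not the PR's exact code:

```python
import platform

def needs_attention_upcast() -> bool:
    # platform.mac_ver()[0] returns e.g. "14.6.1" on macOS, "" elsewhere.
    ver = platform.mac_ver()[0]
    if not ver:
        return False
    parts = tuple(int(p) for p in ver.split("."))
    # Tuple comparison catches 14.5, 14.6, 14.6.1, ... without exact matches.
    return (14, 5) <= parts[:2] <= (14, 6)
```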
comfyanonymous
843a7ff70c
fp16 is actually faster than fp32 on a GTX 1080.
2024-08-21 23:23:50 -04:00
comfyanonymous
a60620dcea
Fix slow performance on 10 series Nvidia GPUs.
2024-08-21 16:39:02 -04:00
comfyanonymous
03ec517afb
Remove useless line, adjust windows default reserved vram.
2024-08-21 00:47:19 -04:00
comfyanonymous
9953f22fce
Add --fast argument to enable experimental optimizations.
...
Optimizations that might break things or lower quality will be put behind
this flag first and might be enabled by default in the future.
Currently the only optimization is float8_e4m3fn matrix multiplication on
4000/Ada-series Nvidia cards or later. If you have one of these cards you
will see a speed boost when using fp8_e4m3fn Flux, for example (a hedged
gating sketch follows this entry).
2024-08-20 11:55:51 -04:00
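A hedged sketch of how the optimization might be gated; `fast_enabled` mirrors the --fast flag, and the compute-capability threshold (Ada is sm_89) is an assumption about where fp8 GEMMs are available:

```python
import torch

def fp8_matmul_supported(fast_enabled: bool) -> bool:
    # Keep the experimental fp8 path behind --fast until proven safe, and
    # only enable it on hardware with fp8 compute (sm_89 / Ada or newer).
    if not fast_enabled or not torch.cuda.is_available():
        return False
    return torch.cuda.get_device_capability() >= (8, 9)
```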
comfyanonymous
1b3eee672c
Fix potential issue with multi devices.
2024-08-20 10:46:36 -04:00
comfyanonymous
045377ea89
Add a --reserve-vram argument if you don't want comfy to use all of it.
...
--reserve-vram 1.0, for example, will make ComfyUI try to keep 1GB of VRAM
free (see the sketch after this entry).
This can also be useful if workflows are failing because of OOM errors; in
that case, please report whether --reserve-vram improves your situation.
2024-08-19 17:16:18 -04:00
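The flag's arithmetic is just a unit conversion folded into the free-memory estimate; a sketch with illustrative names:

```python
def usable_vram(free_vram_bytes: int, reserve_vram_gb: float) -> int:
    # --reserve-vram 1.0 asks ComfyUI to leave ~1GB untouched, e.g. for the
    # desktop compositor or other applications sharing the GPU.
    reserved = int(reserve_vram_gb * 1024**3)
    return max(0, free_vram_bytes - reserved)
```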
comfyanonymous
be0726c1ed
Remove duplication.
2024-08-19 15:26:50 -04:00
doctorpangloss
7500d02af5
Improve language models and performance, adding a translation workflow example
2024-08-15 11:09:55 -07:00
doctorpangloss
0549f35e85
Merge commit '39fb74c5bd13a1dccf4d7293a2f7a755d9f43cbd' of github.com:comfyanonymous/ComfyUI
...
- Improvements to tests
- Fixes model management
- Fixes issues with language nodes
2024-08-13 20:08:56 -07:00
comfyanonymous
39fb74c5bd
Fix bug when model cannot be partially unloaded.
2024-08-13 03:57:55 -04:00
comfyanonymous
74e124f4d7
Fix some issues with TE being in lowvram mode.
2024-08-12 23:42:21 -04:00
comfyanonymous
b8ffb2937f
Memory tweaks.
2024-08-12 15:07:11 -04:00
comfyanonymous
ad76574cb8
Fix some potential issues with the previous commits.
2024-08-12 00:23:29 -04:00
comfyanonymous
5c69cde037
Load TE model straight to vram if certain conditions are met.
2024-08-11 23:52:43 -04:00
comfyanonymous
1de69fe4d5
Fix some issues with inference slowing down.
2024-08-10 16:21:25 -04:00
comfyanonymous
55ad9d5f8c
Fix regression.
2024-08-09 03:36:40 -04:00
comfyanonymous
037c38eb0f
Try to improve inference speed on some machines.
2024-08-08 17:29:27 -04:00
comfyanonymous
66d4233210
Fix.
2024-08-08 15:16:51 -04:00
comfyanonymous
08f92d55e9
Partial model shift support.
2024-08-08 14:45:06 -04:00
comfyanonymous
6969fc9ba4
Make supported_dtypes a priority list.
2024-08-07 15:00:06 -04:00
doctorpangloss
963ede9867
Fix catastrophic indentation bug
2024-08-06 23:14:24 -07:00
comfyanonymous
b334605a66
Fix OOMs happening in some cases.
...
A cloned model patcher sometimes reported a model was loaded on a device
when it wasn't.
2024-08-06 13:36:04 -04:00
comfyanonymous
c14ac98fed
Unload models and load them back in lowvram mode when there is no free VRAM.
2024-08-06 03:22:39 -04:00
doctorpangloss
8ab6b4b697
Fix running on CPU again
2024-08-05 17:20:28 -07:00
doctorpangloss
39c6335331
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-08-05 16:13:20 -07:00
comfyanonymous
8edbcf5209
Improve performance on some lowend GPUs.
2024-08-05 16:24:04 -04:00
comfyanonymous
f7a5107784
Fix crash.
2024-08-03 16:55:38 -04:00
comfyanonymous
91be9c2867
Tweak lowvram memory formula.
2024-08-03 16:44:50 -04:00
comfyanonymous
03c5018c98
Lower lowvram memory to 1/3 of free memory.
2024-08-03 15:14:07 -04:00
comfyanonymous
2ba5cc8b86
Fix some issues.
2024-08-03 15:06:40 -04:00
comfyanonymous
1e68002b87
Cap lowvram to half of free memory.
2024-08-03 14:50:20 -04:00
comfyanonymous
ba9095e5bd
Automatically use fp8 for diffusion model weights if:
...
* The checkpoint contains weights in fp8.
* There isn't enough memory to load the diffusion model in GPU VRAM.
A sketch of this decision logic follows this entry.
2024-08-03 13:45:19 -04:00
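A hedged sketch of the two conditions as code; the state-dict scan and the memory check are simplified stand-ins for the real model-management calls:

```python
import torch

FP8_DTYPES = (torch.float8_e4m3fn, torch.float8_e5m2)

def pick_unet_dtype(state_dict, model_size_fp16: int, free_vram: int):
    checkpoint_has_fp8 = any(t.dtype in FP8_DTYPES for t in state_dict.values())
    fits_in_vram = model_size_fp16 <= free_vram
    # Use fp8 only when the checkpoint already ships fp8 weights and the
    # fp16 version would not fit in GPU VRAM.
    if checkpoint_has_fp8 and not fits_in_vram:
        return torch.float8_e4m3fn
    return torch.float16
```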
doctorpangloss
d9ba795385
Fixes for tests and completing merge
...
- huggingface cache is now better used on platforms that support symlinking, when the files you are requesting already exist in the cache
- absolute imports were changed to relative in the correct places
- StringEnumRequestParameter has a special case in validation
- fix model_management whitespace issue
- fix comfy.ops references
2024-08-01 18:28:51 -07:00
doctorpangloss
0a1ae64b0b
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-08-01 16:19:11 -07:00
comfyanonymous
d965474aaa
Make ComfyUI split batches a higher priority than weight offload.
2024-08-01 16:39:59 -04:00
comfyanonymous
a6decf1e62
Fix bfloat16 potentially not being enabled on mps.
2024-08-01 16:18:44 -04:00
comfyanonymous
1aa9cf3292
Make lowvram more aggressive on low memory machines.
2024-08-01 12:11:57 -04:00
comfyanonymous
5f98de7697
Load flux t5 in fp8 if weights are in fp8.
2024-08-01 11:05:56 -04:00
comfyanonymous
7ad574bffd
Macs support bf16; just make sure you are using the latest PyTorch.
2024-08-01 09:42:17 -04:00