Benjamin Berman
9c9df424b4
Fix CUDA package with no drivers
2024-10-14 22:56:21 -07:00
doctorpangloss
8512f361fe
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-10-14 15:26:27 -07:00
doctorpangloss
a38968f098
Improvements to execution
...
- Validation errors that occur early in the lifecycle of prompt
execution now get propagated to their callers in the
EmbeddedComfyClient. This includes error messages about missing node
classes.
- The execution context now includes the node_id and the prompt_id
- Latent previews are now sent with a node_id. This is not backwards
compatible with old frontends.
- Dependency execution errors are now modeled correctly.
- Distributed progress encodes image previews with node and prompt IDs.
- Typing for models
- The frontend was updated to use node IDs with previews
- Improvements to torch.compile experiments
- Some controlnet_aux nodes were upstreamed
2024-10-10 19:30:18 -07:00
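As an illustration of the execution-context change in the entry above, the snippet below is a minimal sketch: `ExecutionContext` and its fields are hypothetical names chosen for this example, not the repository's actual types. It only shows the idea of a context that carries both the prompt ID and the currently executing node ID.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ExecutionContext:
    """Hypothetical illustration of a per-execution context."""
    prompt_id: str                  # identifies the queued prompt being executed
    node_id: Optional[str] = None   # identifies the node currently running, if any


# Example: previews could be tagged with both IDs so a frontend can route them.
ctx = ExecutionContext(prompt_id="example-prompt", node_id="7")
print(ctx)
```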
doctorpangloss
5f26b76f59
Gracefully handle running with cuda torch on CPU-only devices
2024-10-10 10:42:22 -07:00
Jonathan Avila
4b2f0d9413
Increase maximum macOS version to 15.0.1 when forcing upcast attention ( #5191 )
2024-10-09 22:21:41 -04:00
comfyanonymous
e38c94228b
Add a weight_dtype fp8_e4m3fn_fast to the Diffusion Model Loader node.
...
This is used to load weights in fp8 and use fp8 matrix multiplication.
2024-10-09 19:43:17 -04:00
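A minimal sketch of what "load weights in fp8" can look like, assuming a recent PyTorch build that exposes `torch.float8_e4m3fn`; the helper name and the toy state dict are illustrative, not the loader node's actual code.

```python
import torch


def cast_state_dict_to_fp8(state_dict: dict) -> dict:
    """Cast floating-point tensors to fp8 storage (illustrative helper)."""
    return {
        k: v.to(torch.float8_e4m3fn) if v.is_floating_point() else v
        for k, v in state_dict.items()
    }


# Usage sketch: fp8 storage takes roughly half the memory of fp16 weights.
sd = {"weight": torch.randn(4, 4, dtype=torch.float16)}
sd_fp8 = cast_state_dict_to_fp8(sd)
print(sd_fp8["weight"].dtype)  # torch.float8_e4m3fn
```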
doctorpangloss
c34403b574
Fix invalid device here
2024-10-09 11:21:19 -07:00
doctorpangloss
bbe2ed330c
Memory management and compilation improvements
...
- Experimental support for sage attention on Linux
- Diffusers loader now supports model indices
- Transformers model management now aligns with updates to ComfyUI
- Flux layers correctly use unbind
- Add float8 support for model loading in more places
- Experimental quantization approaches from Quanto and torchao
- Model upscaling interacts with memory management better
This update also disables ROCm testing because it isn't reliable enough
on consumer hardware. ROCm is not really supported by the 7600.
2024-10-09 09:13:47 -07:00
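For the compilation side of the entry above, the snippet below is a hedged sketch of wrapping a module with `torch.compile`, assuming a PyTorch 2.x build; the toy module stands in for a real diffusion model and the mode choice is illustrative.

```python
import torch
import torch.nn as nn

# Toy module standing in for an actual model.
model = nn.Sequential(nn.Linear(16, 16), nn.GELU(), nn.Linear(16, 16))

# torch.compile traces and optimizes the forward pass; "reduce-overhead"
# trades compile time for lower per-call overhead.
compiled = torch.compile(model, mode="reduce-overhead")

with torch.no_grad():
    out = compiled(torch.randn(2, 16))
print(out.shape)
```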
doctorpangloss
d25394d386
API now supports fire-and-forget submission and checking on queue status; prefetch_count is now expressly set to 1 for workers
2024-09-27 12:07:54 -07:00
doctorpangloss
fa3176f96f
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-09-23 12:50:31 -07:00
comfyanonymous
dc96a1ae19
Load controlnet in fp8 if weights are in fp8.
2024-09-21 04:50:12 -04:00
Simon Lui
de8e8e3b0d
Fix xpu Pytorch nightly build from calling optimize which doesn't exist. ( #4978 )
2024-09-19 05:11:42 -04:00
doctorpangloss
db423f8013
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-09-05 09:23:00 -07:00
comfyanonymous
c7427375ee
Prioritize freeing partially offloaded models first.
2024-09-04 19:47:32 -04:00
doctorpangloss
38bcd9fcbd
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-09-03 15:28:52 -07:00
comfyanonymous
8d31a6632f
Speed up inference on nvidia 10 series on Linux.
2024-09-01 17:29:31 -04:00
comfyanonymous
b643eae08b
Make minimum_inference_memory() depend on --reserve-vram
2024-09-01 01:18:34 -04:00
comfyanonymous
935ae153e1
Cleanup.
2024-08-30 12:53:59 -04:00
doctorpangloss
fd503d8a96
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-08-29 16:37:30 -07:00
comfyanonymous
38c22e631a
Fix case where model was not properly unloaded in merging workflows.
2024-08-27 19:03:51 -04:00
doctorpangloss
5155a3e248
Merge WIP
2024-08-25 18:52:29 -07:00
comfyanonymous
5d8bbb7281
Cleanup.
2024-08-23 04:06:27 -04:00
comfyanonymous
2c1d2375d6
Fix.
2024-08-23 04:04:55 -04:00
Simon Lui
64ccb3c7e3
Rework IPEX check for future inclusion of XPU into Pytorch upstream and do a bit more optimization of ipex.optimize(). ( #4562 )
2024-08-23 03:59:57 -04:00
comfyanonymous
7c6bb84016
Code cleanups.
2024-08-22 17:05:12 -04:00
comfyanonymous
c54d3ed5e6
Fix issue with models staying loaded in memory.
2024-08-22 15:58:20 -04:00
David
7b70b266d8
Generalize MacOS version check for force-upcast-attention ( #4548 )
...
This code automatically forces upcasting attention for MacOS versions 14.5 and 14.6. My computer returns the string "14.6.1" for `platform.mac_ver()[0]`, so this generalizes the comparison to catch more versions.
I am running MacOS Sonoma 14.6.1 (latest version) and was seeing black image generation on previously functional workflows after recent software updates. This PR solved the issue for me.
See comfyanonymous/ComfyUI#3521
2024-08-22 13:24:21 -04:00
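A hedged sketch of the generalized version check described above, built around the same `platform.mac_ver()[0]` string; the helper name is illustrative and the exact cutoff logic in the repository may differ.

```python
import platform


def needs_attention_upcast() -> bool:
    """Return True on macOS 14.5+ where upcasting attention avoids black images."""
    ver = platform.mac_ver()[0]  # e.g. "14.6.1"; empty string on non-macOS
    if not ver:
        return False
    parts = tuple(int(p) for p in ver.split(".")[:2])
    return parts >= (14, 5)


print(needs_attention_upcast())
```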
comfyanonymous
843a7ff70c
fp16 is actually faster than fp32 on a GTX 1080.
2024-08-21 23:23:50 -04:00
comfyanonymous
a60620dcea
Fix slow performance on 10 series Nvidia GPUs.
2024-08-21 16:39:02 -04:00
comfyanonymous
03ec517afb
Remove useless line, adjust windows default reserved vram.
2024-08-21 00:47:19 -04:00
comfyanonymous
9953f22fce
Add --fast argument to enable experimental optimizations.
...
Optimizations that might break things/lower quality will be put behind
this flag first and might be enabled by default in the future.
Currently the only optimization is float8_e4m3fn matrix multiplication on
4000/ADA series Nvidia cards or later. If you have one of these cards, you
will see a speed boost when using fp8_e4m3fn flux, for example.
2024-08-20 11:55:51 -04:00
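A minimal sketch of how the fp8 fast path might be gated, assuming it keys off CUDA compute capability (Ada/4000 series reports 8.9); this is an illustration, not the repository's exact check.

```python
import torch


def supports_fp8_fast_matmul() -> bool:
    """Ada (sm_89) and newer expose hardware fp8 matrix multiplication."""
    if not torch.cuda.is_available():
        return False
    major, minor = torch.cuda.get_device_capability()
    return (major, minor) >= (8, 9)


print(supports_fp8_fast_matmul())
```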
comfyanonymous
1b3eee672c
Fix potential issue with multi devices.
2024-08-20 10:46:36 -04:00
comfyanonymous
045377ea89
Add a --reserve-vram argument if you don't want comfy to use all of it.

...
--reserve-vram 1.0, for example, will make ComfyUI try to keep 1GB of vram free.
This can also be useful if workflows are failing because of OOM errors; in that
case, please report whether --reserve-vram improves your situation.
2024-08-19 17:16:18 -04:00
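A sketch of the arithmetic the flag implies: subtract the reserved amount from the device's free memory before deciding how much to load. The helper and `reserve_gb` are illustrative names, not ComfyUI's internals.

```python
import torch


def usable_vram_bytes(device: torch.device, reserve_gb: float = 1.0) -> int:
    """Free VRAM minus the user-reserved amount (illustrative helper)."""
    free_bytes, _total_bytes = torch.cuda.mem_get_info(device)
    return max(0, free_bytes - int(reserve_gb * 1024 ** 3))


if torch.cuda.is_available():
    print(usable_vram_bytes(torch.device("cuda", 0), reserve_gb=1.0))
```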
comfyanonymous
be0726c1ed
Remove duplication.
2024-08-19 15:26:50 -04:00
doctorpangloss
7500d02af5
Improve language models and performance, adding a translation workflow example
2024-08-15 11:09:55 -07:00
doctorpangloss
0549f35e85
Merge commit '39fb74c5bd13a1dccf4d7293a2f7a755d9f43cbd' of github.com:comfyanonymous/ComfyUI
...
- Improvements to tests
- Fixes to model management
- Fixes for issues with language nodes
2024-08-13 20:08:56 -07:00
comfyanonymous
39fb74c5bd
Fix bug when model cannot be partially unloaded.
2024-08-13 03:57:55 -04:00
comfyanonymous
74e124f4d7
Fix some issues with TE being in lowvram mode.
2024-08-12 23:42:21 -04:00
comfyanonymous
b8ffb2937f
Memory tweaks.
2024-08-12 15:07:11 -04:00
comfyanonymous
ad76574cb8
Fix some potential issues with the previous commits.
2024-08-12 00:23:29 -04:00
comfyanonymous
5c69cde037
Load TE model straight to vram if certain conditions are met.
2024-08-11 23:52:43 -04:00
comfyanonymous
1de69fe4d5
Fix some issues with inference slowing down.
2024-08-10 16:21:25 -04:00
comfyanonymous
55ad9d5f8c
Fix regression.
2024-08-09 03:36:40 -04:00
comfyanonymous
037c38eb0f
Try to improve inference speed on some machines.
2024-08-08 17:29:27 -04:00
comfyanonymous
66d4233210
Fix.
2024-08-08 15:16:51 -04:00
comfyanonymous
08f92d55e9
Partial model shift support.
2024-08-08 14:45:06 -04:00
comfyanonymous
6969fc9ba4
Make supported_dtypes a priority list.
2024-08-07 15:00:06 -04:00
doctorpangloss
963ede9867
Fix catastrophic indentation bug
2024-08-06 23:14:24 -07:00
comfyanonymous
b334605a66
Fix OOMs happening in some cases.
...
A cloned model patcher sometimes reported a model was loaded on a device
when it wasn't.
2024-08-06 13:36:04 -04:00
comfyanonymous
c14ac98fed
Unload models and load them back in lowvram mode when there is no free vram.
2024-08-06 03:22:39 -04:00
doctorpangloss
8ab6b4b697
Fix running on CPU again
2024-08-05 17:20:28 -07:00
doctorpangloss
39c6335331
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-08-05 16:13:20 -07:00
comfyanonymous
8edbcf5209
Improve performance on some lowend GPUs.
2024-08-05 16:24:04 -04:00
comfyanonymous
f7a5107784
Fix crash.
2024-08-03 16:55:38 -04:00
comfyanonymous
91be9c2867
Tweak lowvram memory formula.
2024-08-03 16:44:50 -04:00
comfyanonymous
03c5018c98
Lower lowvram memory to 1/3 of free memory.
2024-08-03 15:14:07 -04:00
comfyanonymous
2ba5cc8b86
Fix some issues.
2024-08-03 15:06:40 -04:00
comfyanonymous
1e68002b87
Cap lowvram to half of free memory.
2024-08-03 14:50:20 -04:00
comfyanonymous
ba9095e5bd
Automatically use fp8 for diffusion model weights if:
...
- Checkpoint contains weights in fp8.
- There isn't enough memory to load the diffusion model in GPU vram.
2024-08-03 13:45:19 -04:00
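One reading of the two conditions listed above, sketched as a helper. Whether the repository requires both conditions or either one is not spelled out here, so the `and` below is an assumption.

```python
import torch


def should_keep_fp8(weight_dtype: torch.dtype,
                    model_size_bytes: int,
                    free_vram_bytes: int) -> bool:
    """Keep fp8 storage when the checkpoint is already fp8 and VRAM is tight (assumed)."""
    checkpoint_is_fp8 = weight_dtype in (torch.float8_e4m3fn, torch.float8_e5m2)
    fits_in_vram = model_size_bytes <= free_vram_bytes
    return checkpoint_is_fp8 and not fits_in_vram


print(should_keep_fp8(torch.float8_e4m3fn, 12 * 1024 ** 3, 8 * 1024 ** 3))  # True
```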
doctorpangloss
d9ba795385
Fixes for tests and completing merge
...
- The huggingface cache is now used more effectively on platforms that support
  symlinking when the files you are requesting already exist in the cache
- absolute imports were changed to relative in the correct places
- StringEnumRequestParameter has a special case in validation
- fix model_management whitespace issue
- fix comfy.ops references
2024-08-01 18:28:51 -07:00
doctorpangloss
0a1ae64b0b
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-08-01 16:19:11 -07:00
comfyanonymous
d965474aaa
Make ComfyUI split batches a higher priority than weight offload.
2024-08-01 16:39:59 -04:00
comfyanonymous
a6decf1e62
Fix bfloat16 potentially not being enabled on mps.
2024-08-01 16:18:44 -04:00
comfyanonymous
1aa9cf3292
Make lowvram more aggressive on low memory machines.
2024-08-01 12:11:57 -04:00
comfyanonymous
5f98de7697
Load flux t5 in fp8 if weights are in fp8.
2024-08-01 11:05:56 -04:00
comfyanonymous
7ad574bffd
Mac supports bf16; just make sure you are using the latest pytorch.
2024-08-01 09:42:17 -04:00
comfyanonymous
e2382b6adb
Make lowvram less aggressive when there are large amounts of free memory.
2024-08-01 03:58:58 -04:00
doctorpangloss
ce5fe01768
Improve performance and memory management of upscale models; improve messaging about which models are loaded and unloaded from the GPU
2024-07-30 17:05:53 -07:00
doctorpangloss
72baecad87
Improve logging and tracing for validation errors
2024-07-16 12:26:30 -07:00
doctorpangloss
b6b97574dc
Handle CPU torch more gracefully
2024-07-05 10:47:06 -07:00
doctorpangloss
dbc2a4ba29
Disable this lint warning
2024-06-28 18:45:15 -07:00
doctorpangloss
8cdc246450
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-06-17 16:19:48 -07:00
comfyanonymous
6425252c4f
Use fp16 as the default vae dtype for the audio VAE.
2024-06-16 13:12:54 -04:00
comfyanonymous
0ec513d877
Add a --force-channels-last argument to run inference models in channels-last mode.
2024-06-15 01:08:12 -04:00
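A hedged sketch of what channels-last inference means in PyTorch terms; the toy conv model is illustrative, and the flag's exact wiring in the codebase may differ.

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 8, kernel_size=3, padding=1).eval()
x = torch.randn(1, 3, 64, 64)

# Channels-last keeps the same logical NCHW shape but stores memory as NHWC,
# which can be faster for convolutions on some hardware.
model = model.to(memory_format=torch.channels_last)
x = x.to(memory_format=torch.channels_last)

with torch.no_grad():
    y = model(x)
print(y.is_contiguous(memory_format=torch.channels_last))
```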
Max Tretikov
8b091f02de
Add xformer.ops imports
2024-06-14 14:09:46 -06:00
Max Tretikov
9cf4f9830f
Coalesce directml_enabled and directml_device into one variable
2024-06-14 13:16:05 -06:00
Max Tretikov
a919272e3b
Fix errors in model_management.py
2024-06-14 00:17:47 -06:00
Simon Lui
5eb98f0092
Exempt IPEX from non_blocking previews fixing segmentation faults. ( #3708 )
2024-06-13 18:51:14 -04:00
doctorpangloss
cac6690481
Add known SD3 model files, merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-06-12 10:56:41 -07:00
comfyanonymous
0e49211a11
Load the SD3 T5xxl model in the same dtype stored in the checkpoint.
2024-06-11 17:03:26 -04:00
doctorpangloss
d778277a68
Merge with upstream and fix tests
2024-06-10 10:01:08 -07:00
comfyanonymous
104fcea0c8
Add function to get the list of currently loaded models.
2024-06-05 23:25:16 -04:00
comfyanonymous
b1fd26fe9e
pytorch xpu should use flash or mem efficient attention?
2024-06-04 17:44:14 -04:00
doctorpangloss
3f559135c6
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-06-03 11:42:55 -07:00
comfyanonymous
b249862080
Add an annoying print to a function I want to remove.
2024-06-01 12:47:31 -04:00
doctorpangloss
cb557c960b
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-05-31 07:42:11 -07:00
comfyanonymous
bf3e334d46
Disable non_blocking when --deterministic or directml.
2024-05-30 11:07:38 -04:00
comfyanonymous
0920e0e5fe
Remove some unused imports.
2024-05-27 19:08:27 -04:00
doctorpangloss
a79ccd625f
bf16 selection for AMD
2024-05-22 22:45:15 -07:00
doctorpangloss
35cf996b68
ROCm 6.0 seems to require get_device_name to be called before memory methods in order to return valid data
2024-05-22 22:09:07 -07:00
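A minimal sketch of the ordering workaround described above: query the device name before any memory calls. Device index 0 is an assumption; ROCm builds surface HIP devices through the `torch.cuda` API.

```python
import torch

if torch.cuda.is_available():
    device = torch.device("cuda", 0)
    # On ROCm 6.0 the name query appears to initialize state that later
    # memory queries rely on, so call it first (workaround, not guaranteed).
    _ = torch.cuda.get_device_name(device)
    free_bytes, total_bytes = torch.cuda.mem_get_info(device)
    print(free_bytes, total_bytes)
```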
comfyanonymous
6c23854f54
Fix OSX latent2rgb previews.
2024-05-22 13:56:28 -04:00
comfyanonymous
8508df2569
Work around black image bug on Mac 14.5 by forcing attention upcasting.
2024-05-21 16:56:33 -04:00
doctorpangloss
f69b6225c0
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-05-20 12:06:35 -07:00
comfyanonymous
09e069ae6c
Log the pytorch version.
2024-05-20 06:22:29 -04:00
doctorpangloss
519cddcefc
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-05-17 14:04:44 -07:00
doctorpangloss
cb45b86b63
Patch torch device code here
2024-05-17 07:19:15 -07:00
comfyanonymous
19300655dd
Don't automatically switch to lowvram mode on GPUs with low memory.
2024-05-17 00:31:32 -04:00
doctorpangloss
3d98440fb7
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-05-16 14:28:49 -07:00
doctorpangloss
8741cb3ce8
LLM support in ComfyUI
...
- Currently uses `transformers`
- Supports model management and correctly loading and unloading models
based on what your machine can support
- Includes a Text Diffusers 2 workflow to demonstrate text rendering in
SD1.5
2024-05-14 17:30:23 -07:00
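A hedged sketch of what `transformers`-based text generation looks like in general; the model id is a placeholder and this is not the node implementation described above.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "example-org/example-llm"  # placeholder, not a model shipped with ComfyUI

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # device_map needs accelerate

inputs = tokenizer("Describe a rainy street at night:", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```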
Simon Lui
f509c6fe21
Fix Intel GPU memory allocation accuracy and documentation update. ( #3459 )
...
* Change calculation of memory total to be more accurate; allocated is actually smaller than reserved.
* Update README.md install documentation for Intel GPUs.
2024-05-12 06:36:30 -04:00
comfyanonymous
fa6dd7e5bb
Fix lowvram issue with saving checkpoints.
...
The previous fix didn't cover the case where the model was loaded in
lowvram mode right before.
2024-05-12 06:13:45 -04:00
comfyanonymous
49c20cdc70
No longer necessary.
2024-05-12 05:34:43 -04:00
comfyanonymous
e1489ad257
Fix issue with lowvram mode breaking model saving.
2024-05-11 21:55:20 -04:00
doctorpangloss
779ff30c17
Provide a protocol for plugins to declare model-management-manageable models. Docs will be updated to specify that plugin authors should use ModelPatcher generally.
2024-05-09 16:07:18 -07:00
doctorpangloss
330ecb10b2
Merge with upstream. Remove TLS flags, because a third party proxy will do this better
2024-05-02 21:57:20 -07:00
Simon Lui
a56d02efc7
Change torch.xpu to ipex.optimize, xpu device initialization and remove workaround for text node issue from older IPEX. ( #3388 )
2024-05-02 03:26:50 -04:00
doctorpangloss
f965fb2bc0
Merge upstream
2024-04-24 22:41:43 -07:00
comfyanonymous
258dbc06c3
Fix some memory related issues.
2024-04-14 12:08:58 -04:00
doctorpangloss
034ffcea03
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-04-08 10:02:37 -07:00
comfyanonymous
0a03009808
Fix issue with controlnet models getting loaded multiple times.
2024-04-06 18:38:39 -04:00
doctorpangloss
8f548d4d19
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-03-29 13:36:57 -07:00
comfyanonymous
5d8898c056
Fix some performance issues with weight loading and unloading.
...
Lower peak memory usage when changing model.
Fix case where model weights would be unloaded and reloaded.
2024-03-28 18:04:42 -04:00
comfyanonymous
c6de09b02e
Optimize the memory unload strategy for better performance.
2024-03-24 02:36:30 -04:00
doctorpangloss
005e370254
Merge upstream
2024-03-21 13:15:36 -07:00
comfyanonymous
4b9005e949
Fix regression with model merging.
2024-03-20 13:56:12 -04:00
comfyanonymous
c18a203a8a
Don't unload model weights for non weight patches.
2024-03-20 02:27:58 -04:00
comfyanonymous
db8b59ecff
Lower memory usage for loras in lowvram mode at the cost of perf.
2024-03-13 20:07:27 -04:00
doctorpangloss
341c9f2e90
Improvements to node loading, node API, folder paths and progress
...
- Improve node loading order. It now occurs "as late as possible".
Configuration should be exposed as per the README.
- Added methods for custom nodes to more robustly specify custom folders and
  the models used in examples.
- Downloading models can now be gracefully interrupted.
- Progress notifications are now sent over the network for distributed
ComfyUI operations.
- Python objects have been moved around to reduce transitive package import
  issues.
2024-03-13 16:14:18 -07:00
doctorpangloss
93cdef65a4
Merge upstream
2024-03-12 09:49:47 -07:00
comfyanonymous
0ed72befe1
Change log levels.
...
Logging level now defaults to info. --verbose sets it to debug.
2024-03-11 13:54:56 -04:00
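A minimal sketch of the described behavior, assuming a standard `argparse`/`logging` setup rather than the repository's actual CLI wiring.

```python
import argparse
import logging

parser = argparse.ArgumentParser()
parser.add_argument("--verbose", action="store_true", help="enable debug logging")
args = parser.parse_args([])  # empty list so the sketch runs without CLI arguments

# Default to INFO; --verbose switches the root logger to DEBUG.
logging.basicConfig(level=logging.DEBUG if args.verbose else logging.INFO)
logging.info("info messages are shown by default")
logging.debug("debug messages only appear with --verbose")
```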
doctorpangloss
00728eb20f
Merge upstream
2024-03-11 09:32:57 -07:00
comfyanonymous
65397ce601
Replace prints with logging and add --verbose argument.
2024-03-10 12:14:23 -04:00
doctorpangloss
c0d9bc0129
Merge with upstream
2024-03-08 15:17:20 -08:00
comfyanonymous
dce3555339
Add some tesla pascal GPUs to the fp16 working but slower list.
2024-03-02 17:16:31 -05:00
doctorpangloss
7520691021
Merge with master
2024-02-19 10:55:22 -08:00
comfyanonymous
88f300401c
Enable fp16 by default on mps.
2024-02-19 12:00:48 -05:00
comfyanonymous
929e266f3e
Manual cast for bf16 on older GPUs.
2024-02-17 09:01:17 -05:00
comfyanonymous
0b3c50480c
Make --force-fp32 disable loading models in bf16.
2024-02-16 23:01:54 -05:00
comfyanonymous
f83109f09b
Stable Cascade Stage C.
2024-02-16 10:55:08 -05:00
comfyanonymous
aeaeca10bd
Small refactor of is_device_* functions.
2024-02-15 21:10:10 -05:00
doctorpangloss
3367362cec
Fix directml again now that I understand what the command line is doing
2024-02-08 10:17:49 -08:00
Benjamin Berman
8508a5a853
Fix args.directml is not None error
2024-02-08 08:40:13 -08:00
doctorpangloss
d9b4607c36
Add locks to model_management to prevent multiple copies of the models from being loaded at the same time
2024-02-07 15:18:13 -08:00
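A sketch of the locking idea in the entry above, assuming a module-level `threading.Lock`; the names are illustrative, not the actual `model_management` functions.

```python
import threading

_load_lock = threading.Lock()
_loaded_models: dict[str, object] = {}


def load_model_once(name: str, loader) -> object:
    """Serialize loads so concurrent requests share one copy instead of loading duplicates."""
    with _load_lock:
        if name not in _loaded_models:
            _loaded_models[name] = loader()
        return _loaded_models[name]


# Usage sketch: the second call returns the cached object without loading again.
m1 = load_model_once("demo", lambda: object())
m2 = load_model_once("demo", lambda: object())
print(m1 is m2)  # True
```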
doctorpangloss
8e9052c843
Merge with upstream
2024-02-07 14:27:50 -08:00
comfyanonymous
66e28ef45c
Don't use is_bf16_supported to check for fp16 support.
2024-02-04 20:53:35 -05:00
comfyanonymous
24129d78e6
Speed up SDXL on 16xx series with fp16 weights and manual cast.
2024-02-04 13:23:43 -05:00
comfyanonymous
4b0239066d
Always use fp16 for the text encoders.
2024-02-02 10:02:49 -05:00
doctorpangloss
82edb2ff0e
Merge with latest upstream.
2024-01-29 15:06:31 -08:00
comfyanonymous
f9e55d8463
Only auto enable bf16 VAE on nvidia GPUs that actually support it.
2024-01-15 03:10:22 -05:00
doctorpangloss
369aeb598f
Merge upstream, fix 3.12 compatibility, fix nightlies issue, fix broken node
2024-01-03 16:00:36 -08:00
comfyanonymous
1b103e0cb2
Add argument to run the VAE on the CPU.
2023-12-30 05:49:07 -05:00
comfyanonymous
e1e322cf69
Load weights that can't be lowvramed to target device.
2023-12-28 21:41:10 -05:00
comfyanonymous
a252963f95
--disable-smart-memory now unloads everything like it did originally.
2023-12-23 04:25:06 -05:00
comfyanonymous
36a7953142
Greatly improve lowvram sampling speed by getting rid of accelerate.
...
Let me know if this breaks anything.
2023-12-22 14:38:45 -05:00
comfyanonymous
2f9d6a97ec
Add --deterministic option to make pytorch use deterministic algorithms.
2023-12-17 16:59:21 -05:00
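A minimal sketch of what the flag enables, using PyTorch's public determinism switches; the surrounding CLI handling is assumed.

```python
import torch


def enable_determinism() -> None:
    """Prefer deterministic kernels; ops without a deterministic variant will raise."""
    torch.use_deterministic_algorithms(True)
    torch.backends.cudnn.benchmark = False


enable_determinism()
print(torch.are_deterministic_algorithms_enabled())  # True
```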
comfyanonymous
b0aab1e4ea
Add an option --fp16-unet to force using fp16 for the unet.
2023-12-11 18:36:29 -05:00
comfyanonymous
ba07cb748e
Use faster manual cast for fp8 in unet.
2023-12-11 18:24:44 -05:00
comfyanonymous
57926635e8
Switch text encoder to manual cast.
...
Use fp16 text encoder weights for CPU inference to lower memory usage.
2023-12-10 23:00:54 -05:00
comfyanonymous
340177e6e8
Disable non blocking on mps.
2023-12-10 01:30:35 -05:00
comfyanonymous
9ac0b487ac
Make --gpu-only put intermediate values in GPU memory instead of cpu.
2023-12-08 02:35:45 -05:00