doctorpangloss
a79ccd625f
bf16 selection for AMD
2024-05-22 22:45:15 -07:00
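A hedged sketch of what bf16 selection for AMD can look like: on ROCm builds `torch.version.hip` is set, and `torch.cuda.is_bf16_supported()` reports whether bf16 kernels work. The `supports_bf16` helper and the Ampere cutoff for NVIDIA are illustrative assumptions, not this commit's exact code.

```python
import torch

def supports_bf16(device: torch.device) -> bool:
    """Hypothetical helper: decide whether bf16 is usable on this device."""
    if device.type != "cuda":
        return False
    if torch.version.hip is not None:
        # ROCm build: defer to pytorch's own bf16 capability check.
        return torch.cuda.is_bf16_supported()
    # NVIDIA: bf16 needs Ampere or newer (compute capability >= 8.0).
    major, _ = torch.cuda.get_device_capability(device)
    return major >= 8
```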
doctorpangloss
35cf996b68
ROCm 6.0 seems to require get_device_name to be called before memory methods in order to return valid data
2024-05-22 22:09:07 -07:00
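A minimal sketch of the workaround this describes: name the device once before asking for memory statistics, so that ROCm 6.0 returns valid numbers (the function name is illustrative).

```python
import torch

def get_free_and_total_vram(device: torch.device) -> tuple[int, int]:
    # ROCm 6.0 workaround: get_device_name must be called before the
    # memory methods, or they may return invalid data.
    torch.cuda.get_device_name(device)
    return torch.cuda.mem_get_info(device)
```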
doctorpangloss
f69b6225c0
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-05-20 12:06:35 -07:00
comfyanonymous
09e069ae6c
Log the pytorch version.
2024-05-20 06:22:29 -04:00
doctorpangloss
519cddcefc
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-05-17 14:04:44 -07:00
doctorpangloss
cb45b86b63
Patch torch device code here
2024-05-17 07:19:15 -07:00
comfyanonymous
19300655dd
Don't automatically switch to lowvram mode on GPUs with low memory.
2024-05-17 00:31:32 -04:00
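Illustratively, the kind of heuristic this commit drops, with a cutoff and names that are assumptions: a small GPU is no longer demoted to lowvram mode automatically.

```python
LOWVRAM_CUTOFF = 4 * 1024**3  # assumed 4 GiB threshold, illustrative

def pick_vram_mode(total_vram_bytes: int, user_mode: str | None) -> str:
    if user_mode is not None:
        return user_mode  # an explicit choice always wins
    # Old behavior (removed): silently demote small GPUs.
    # if total_vram_bytes < LOWVRAM_CUTOFF:
    #     return "lowvram"
    return "normal"
```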
doctorpangloss
3d98440fb7
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-05-16 14:28:49 -07:00
doctorpangloss
8741cb3ce8
LLM support in ComfyUI
...
- Currently uses `transformers`
- Supports model management, correctly loading and unloading models
based on what your machine can support
- Includes a Text Diffusers 2 workflow to demonstrate text rendering in
SD1.5
2024-05-14 17:30:23 -07:00
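A minimal sketch of the `transformers`-based pattern the bullet points describe, with the model id and helper chosen for illustration: the model lives on the CPU and is moved onto the GPU only while generating, so other models can use the VRAM.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "gpt2"  # illustrative; not necessarily what the nodes ship with

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.float16)

def generate(prompt: str, device: str = "cuda") -> str:
    model.to(device)  # load onto the GPU only for the duration of the call
    try:
        inputs = tokenizer(prompt, return_tensors="pt").to(device)
        out = model.generate(**inputs, max_new_tokens=32)
        return tokenizer.decode(out[0], skip_special_tokens=True)
    finally:
        model.to("cpu")  # unload so other models can claim the VRAM
```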
Simon Lui
f509c6fe21
Fix Intel GPU memory allocation accuracy and documentation update. ( #3459 )
...
* Change the calculation of total memory to be more accurate; allocated is actually smaller than reserved.
* Update README.md install documentation for Intel GPUs.
2024-05-12 06:36:30 -04:00
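The fix's point is that the caching allocator's reserved figure, not the allocated one, reflects what has actually been taken from the device. A sketch of a free-memory estimate under that assumption (IPEX/XPU API names as I understand them; treat them as assumptions).

```python
import torch
import intel_extension_for_pytorch  # noqa: F401 -- registers torch.xpu

def xpu_free_memory(device: torch.device) -> int:
    # Reserved >= allocated: the allocator holds reserved bytes even
    # when they are not currently backing tensors, so subtract
    # reserved from the device total for an accurate free estimate.
    props = torch.xpu.get_device_properties(device)
    return props.total_memory - torch.xpu.memory_reserved(device)
```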
comfyanonymous
fa6dd7e5bb
Fix lowvram issue with saving checkpoints.
...
The previous fix didn't cover the case where the model was loaded in
lowvram mode right before.
2024-05-12 06:13:45 -04:00
comfyanonymous
49c20cdc70
No longer necessary.
2024-05-12 05:34:43 -04:00
comfyanonymous
e1489ad257
Fix issue with lowvram mode breaking model saving.
2024-05-11 21:55:20 -04:00
doctorpangloss
779ff30c17
Provide a protocol for plugins to declare models that model management can manage. Docs will be updated to specify that plugin authors should generally use ModelPatcher.
2024-05-09 16:07:18 -07:00
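A hedged sketch of what such a protocol could look like using `typing.Protocol`; the member names are illustrative, not the actual interface, and ModelPatcher remains the recommended wrapper.

```python
from typing import Protocol, runtime_checkable

import torch

@runtime_checkable
class ModelManageable(Protocol):
    """Illustrative members only; not the real interface."""

    load_device: torch.device     # device the model runs on
    offload_device: torch.device  # device it is parked on when unloaded

    def model_size(self) -> int:
        """Bytes of memory required while loaded."""
        ...

    def model_dtype(self) -> torch.dtype:
        ...
```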
doctorpangloss
330ecb10b2
Merge with upstream. Remove TLS flags, because a third-party proxy will do this better.
2024-05-02 21:57:20 -07:00
Simon Lui
a56d02efc7
Change torch.xpu to ipex.optimize, fix xpu device initialization, and remove the workaround for the text node issue from older IPEX. ( #3388 )
2024-05-02 03:26:50 -04:00
doctorpangloss
f965fb2bc0
Merge upstream
2024-04-24 22:41:43 -07:00
comfyanonymous
258dbc06c3
Fix some memory related issues.
2024-04-14 12:08:58 -04:00
doctorpangloss
034ffcea03
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-04-08 10:02:37 -07:00
comfyanonymous
0a03009808
Fix issue with controlnet models getting loaded multiple times.
2024-04-06 18:38:39 -04:00
doctorpangloss
8f548d4d19
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-03-29 13:36:57 -07:00
comfyanonymous
5d8898c056
Fix some performance issues with weight loading and unloading.
...
Lower peak memory usage when changing models.
Fix a case where model weights would be unloaded and reloaded.
2024-03-28 18:04:42 -04:00
comfyanonymous
c6de09b02e
Optimize the memory unload strategy for better performance.
2024-03-24 02:36:30 -04:00
doctorpangloss
005e370254
Merge upstream
2024-03-21 13:15:36 -07:00
comfyanonymous
4b9005e949
Fix regression with model merging.
2024-03-20 13:56:12 -04:00
comfyanonymous
c18a203a8a
Don't unload model weights for non-weight patches.
2024-03-20 02:27:58 -04:00
comfyanonymous
db8b59ecff
Lower memory usage for LoRAs in lowvram mode at the cost of some performance.
2024-03-13 20:07:27 -04:00
doctorpangloss
341c9f2e90
Improvements to node loading, node API, folder paths and progress
...
- Improve node loading order: loading now occurs as late as possible,
and configuration should be exposed as described in the README.
- Add methods for custom nodes to specify custom folders and the
models used in examples more robustly.
- Model downloads can now be gracefully interrupted.
- Progress notifications are now sent over the network for distributed
ComfyUI operations.
- Python objects have been moved around to reduce transitive package
import issues.
2024-03-13 16:14:18 -07:00
doctorpangloss
93cdef65a4
Merge upstream
2024-03-12 09:49:47 -07:00
comfyanonymous
0ed72befe1
Change log levels.
...
Logging level now defaults to info. --verbose sets it to debug.
2024-03-11 13:54:56 -04:00
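A minimal sketch of the described behavior, default INFO, DEBUG with `--verbose`:

```python
import argparse
import logging

parser = argparse.ArgumentParser()
parser.add_argument("--verbose", action="store_true", help="enable debug logging")
args = parser.parse_args()

logging.basicConfig(level=logging.DEBUG if args.verbose else logging.INFO)
logging.debug("only visible with --verbose")
logging.info("visible by default")
```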
doctorpangloss
00728eb20f
Merge upstream
2024-03-11 09:32:57 -07:00
comfyanonymous
65397ce601
Replace prints with logging and add --verbose argument.
2024-03-10 12:14:23 -04:00
doctorpangloss
c0d9bc0129
Merge with upstream
2024-03-08 15:17:20 -08:00
comfyanonymous
dce3555339
Add some Tesla Pascal GPUs to the fp16-working-but-slower list.
2024-03-02 17:16:31 -05:00
doctorpangloss
7520691021
Merge with master
2024-02-19 10:55:22 -08:00
comfyanonymous
88f300401c
Enable fp16 by default on mps.
2024-02-19 12:00:48 -05:00
comfyanonymous
929e266f3e
Manual cast for bf16 on older GPUs.
2024-02-17 09:01:17 -05:00
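Manual casting keeps weights stored in one dtype and casts them to the input's dtype at call time. A simplified sketch of the idea, not the actual implementation:

```python
import torch

class ManualCastLinear(torch.nn.Linear):
    """Weights stay in their storage dtype (e.g. bf16); older GPUs
    without native bf16 kernels compute in the input's dtype instead."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weight = self.weight.to(x.dtype)
        bias = self.bias.to(x.dtype) if self.bias is not None else None
        return torch.nn.functional.linear(x, weight, bias)
```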
comfyanonymous
0b3c50480c
Make --force-fp32 disable loading models in bf16.
2024-02-16 23:01:54 -05:00
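A sketch of the dtype-selection order this implies, with hypothetical argument names and ordering: `--force-fp32` now rules out bf16 as well as fp16.

```python
import torch

def pick_model_dtype(force_fp32: bool, bf16_ok: bool, fp16_ok: bool) -> torch.dtype:
    if force_fp32:
        return torch.float32  # --force-fp32 wins over bf16 too
    if bf16_ok:
        return torch.bfloat16
    if fp16_ok:
        return torch.float16
    return torch.float32
```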
comfyanonymous
f83109f09b
Stable Cascade Stage C.
2024-02-16 10:55:08 -05:00
comfyanonymous
aeaeca10bd
Small refactor of is_device_* functions.
2024-02-15 21:10:10 -05:00
doctorpangloss
3367362cec
Fix directml again now that I understand what the command line is doing
2024-02-08 10:17:49 -08:00
Benjamin Berman
8508a5a853
Fix args.directml is not None error
2024-02-08 08:40:13 -08:00
doctorpangloss
d9b4607c36
Add locks to model_management to prevent multiple copies of a model from being loaded at the same time
2024-02-07 15:18:13 -08:00
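A minimal sketch of the locking idea, with hypothetical names: serialize loads so concurrent requests reuse one resident copy instead of each loading their own.

```python
import threading
from typing import Callable

_load_lock = threading.Lock()
_loaded: dict[str, object] = {}

def load_model(name: str, loader: Callable[[], object]) -> object:
    with _load_lock:  # only one thread may load at a time
        if name not in _loaded:
            _loaded[name] = loader()
        return _loaded[name]  # reuse the already-resident copy
```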
doctorpangloss
8e9052c843
Merge with upstream
2024-02-07 14:27:50 -08:00
comfyanonymous
66e28ef45c
Don't use is_bf16_supported to check for fp16 support.
2024-02-04 20:53:35 -05:00
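The point of the fix: `torch.cuda.is_bf16_supported()` answers a stricter question than fp16 support, so cards like Turing would be wrongly rejected. A compute-capability check is the usual fp16 test; a sketch with an assumed cutoff:

```python
import torch

def fp16_supported(device: torch.device) -> bool:
    # bf16 needs Ampere (sm_80+), but fp16 arithmetic has worked since
    # sm_53, so is_bf16_supported() is the wrong test for fp16.
    major, minor = torch.cuda.get_device_capability(device)
    return (major, minor) >= (5, 3)  # assumed cutoff
```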
comfyanonymous
24129d78e6
Speed up SDXL on 16xx series with fp16 weights and manual cast.
2024-02-04 13:23:43 -05:00
comfyanonymous
4b0239066d
Always use fp16 for the text encoders.
2024-02-02 10:02:49 -05:00
doctorpangloss
82edb2ff0e
Merge with latest upstream.
2024-01-29 15:06:31 -08:00
comfyanonymous
f9e55d8463
Only auto-enable the bf16 VAE on NVIDIA GPUs that actually support it.
2024-01-15 03:10:22 -05:00
doctorpangloss
369aeb598f
Merge upstream; fix Python 3.12 compatibility, a nightlies issue, and a broken node
2024-01-03 16:00:36 -08:00