Commit Graph

58 Commits

Author SHA1 Message Date
comfyanonymous
442430dcef Some comments to say what the vram state options mean. 2023-06-04 17:51:04 -04:00
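The vram state options this commit documents behave roughly as follows; a minimal sketch of an annotated enum, assuming state names along these lines (the exact names and comments in model_management.py may differ):

```python
from enum import Enum

class VRAMState(Enum):
    NO_VRAM = 1      # very low vram: enable every option that saves vram
    LOW_VRAM = 2     # split big models between vram and ram as needed
    NORMAL_VRAM = 3  # default: load a model fully into vram while in use
    HIGH_VRAM = 4    # keep models resident in vram instead of unloading
```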
comfyanonymous
e136e86a13 Cleanups and fixes for model_management.py
Hopefully fix regression on MPS and CPU.
2023-06-03 11:05:37 -04:00
comfyanonymous
6b80950a41 Refactor and improve model_management code related to free memory. 2023-06-02 15:21:33 -04:00
space-nuko
7cb90ba509 More accurate total 2023-06-02 00:14:41 -05:00
space-nuko
22b707f1cf System stats endpoint 2023-06-01 23:26:23 -05:00
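A system stats endpoint like the one added here can be sketched with aiohttp (the framework the project's server is built on); the route name matches the commit message, but the response fields below are assumptions:

```python
import psutil
import torch
from aiohttp import web

routes = web.RouteTableDef()

@routes.get("/system_stats")
async def system_stats(request):
    vram_free = vram_total = 0
    if torch.cuda.is_available():
        # Returns (free, total) in bytes for the current CUDA device.
        vram_free, vram_total = torch.cuda.mem_get_info()
    mem = psutil.virtual_memory()
    return web.json_response({
        "ram_total": mem.total,      # field names are illustrative
        "ram_free": mem.available,
        "vram_total": vram_total,
        "vram_free": vram_free,
    })
```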
comfyanonymous
f929d8df00 Tweak lowvram model memory so it's closer to what it was before. 2023-06-01 04:04:35 -04:00
comfyanonymous
f6f0a25226 Empty cache on mps. 2023-06-01 03:52:51 -04:00
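Emptying the cache on MPS mirrors what is already done for CUDA; a minimal sketch, assuming torch 2.0+ where torch.mps.empty_cache() is available:

```python
import torch

def soft_empty_cache():
    # Return cached allocator blocks to the OS on the active backend.
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
    elif torch.backends.mps.is_available():
        torch.mps.empty_cache()  # torch 2.0+
```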
comfyanonymous
4af4fe017b Auto load model in lowvram if not enough memory. 2023-05-30 12:36:41 -04:00
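The auto-lowvram decision boils down to comparing the model's memory footprint against the currently free vram; a hypothetical helper (the names and the safety margin are illustrative, not the repository's code):

```python
def should_use_lowvram(model_size_bytes: int, free_vram_bytes: int) -> bool:
    # Keep some headroom for activations; the 0.9 margin is an assumption.
    return model_size_bytes > free_vram_bytes * 0.9
```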
comfyanonymous
8c539fa5bc Print the torch device that is used on startup. 2023-05-13 17:11:27 -04:00
comfyanonymous
f7e427c557 Make maximum_batch_area take into account the pytorch 2.0 attention function.
More conservative xformers maximum_batch_area.
2023-05-06 19:58:54 -04:00
comfyanonymous
57f35b3d16 maximum_batch_area for xformers.
Remove useless code.
2023-05-06 19:28:46 -04:00
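maximum_batch_area caps how large a batch of latents is attempted based on free memory; a rough sketch of the idea, with illustrative constants rather than the repository's:

```python
def maximum_batch_area(free_memory_bytes: float) -> int:
    memory_free_mb = free_memory_bytes / (1024 * 1024)
    # Reserve a fixed chunk, then scale the allowed latent area by an
    # assumed per-area memory cost; both constants are guesses here.
    area = ((memory_free_mb - 1024) * 0.9) / 0.6
    return int(max(area, 0))
```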
comfyanonymous
b877fefbb3 Lowvram mode for gligen and fix some lowvram issues. 2023-05-05 18:11:41 -04:00
comfyanonymous
93fc8c1ea9 Fix import. 2023-05-05 00:19:35 -04:00
comfyanonymous
2edaaba3c2 Fix imports. 2023-05-04 18:10:29 -04:00
comfyanonymous
806786ed1d Don't try to get vram from xpu or cuda when directml is enabled. 2023-04-29 00:28:48 -04:00
comfyanonymous
e7ae3bc44c You can now select the device index with: --directml id
For example: --directml 1
2023-04-28 16:51:35 -04:00
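Device index selection with torch_directml looks like this; a minimal sketch using the real torch_directml.device() call (the tensor allocation is just an illustrative sanity check):

```python
import torch
import torch_directml

# `--directml 1` would translate to an explicit adapter index:
dml = torch_directml.device(1)
x = torch.ones(4, device=dml)  # quick check that the device is usable
```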
comfyanonymous
c2afcad2a5 Basic torch_directml support. Use --directml to use it. 2023-04-28 14:28:57 -04:00
comfyanonymous
e6771d0986 Implement Linear hypernetworks.
Add a HypernetworkLoader node to use hypernetworks.
2023-04-23 12:35:25 -04:00
comfyanonymous
6c156642e4 Add support for GLIGEN textbox model. 2023-04-19 11:06:32 -04:00
comfyanonymous
3b9a2f504d Move code to empty gpu cache to model_management.py 2023-04-15 11:19:07 -04:00
comfyanonymous
4861dbb2e2 Print xformers version and warning about 0.0.18 2023-04-09 01:31:47 -04:00
comfyanonymous
d35efcbcb2 Add a --force-fp32 argument to force fp32 for debugging. 2023-04-07 00:27:54 -04:00
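Several flags in this history (--force-fp32, --cpu, --highvram) are plain boolean switches; a minimal argparse sketch with paraphrased, not verbatim, help text:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--force-fp32", action="store_true",
                    help="Force fp32 everywhere (useful for debugging).")
parser.add_argument("--cpu", action="store_true",
                    help="Run inference on the CPU, overriding device flags.")
parser.add_argument("--highvram", action="store_true",
                    help="Keep models in vram instead of unloading them.")
args = parser.parse_args()
```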
comfyanonymous
55a48f27db Small refactor. 2023-04-06 23:53:54 -04:00
藍+85CD
3adedd52d3 Merge branch 'master' into ipex 2023-04-07 09:11:30 +08:00
藍+85CD
01c0951a73 Fix auto lowvram detection on CUDA 2023-04-06 15:44:05 +08:00
藍+85CD
23fd4abad1 Use separate variables instead of vram_state 2023-04-06 14:24:47 +08:00
藍+85CD
772741f218 Import intel_extension_for_pytorch as ipex 2023-04-06 12:27:22 +08:00
EllangoK
cd00b46465 separates out arg parser and imports args 2023-04-05 23:41:23 -04:00
藍+85CD
16f4d42aa0 Add basic XPU device support
closed #387
2023-04-05 21:22:14 +08:00
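Importing intel_extension_for_pytorch is what registers the xpu device with torch; a minimal sketch of XPU detection, with the fallback chain as an assumption:

```python
import torch

try:
    import intel_extension_for_pytorch as ipex  # noqa: F401  registers "xpu"
    xpu_available = torch.xpu.is_available()
except ImportError:
    xpu_available = False

device = torch.device("xpu") if xpu_available else torch.device("cpu")
```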
comfyanonymous
7597a5d83e Disable xformers in VAE when xformers == 0.0.18 2023-04-04 22:22:02 -04:00
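Pinning behavior to a specific xformers version is a straightforward string check; a minimal sketch (the exact symptom that 0.0.18 causes in the VAE is not spelled out in this log):

```python
try:
    import xformers
    XFORMERS_VERSION = xformers.__version__
    if XFORMERS_VERSION == "0.0.18":
        print("WARNING: xformers 0.0.18 detected; disabling it for the VAE.")
    XFORMERS_VAE_OK = XFORMERS_VERSION != "0.0.18"
except ImportError:
    XFORMERS_VAE_OK = False
```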
Francesco Yoshi Gobbo
be201f0511 code cleanup 2023-03-27 06:48:09 +02:00
Francesco Yoshi Gobbo
28e1209e7e no lowvram state if cpu only 2023-03-27 04:51:18 +02:00
comfyanonymous
089cd3adc1 I don't think controlnets were being handled correctly by MPS. 2023-03-24 14:33:16 -04:00
Yurii Mazurevich
1206cb3386 Fixed typo 2023-03-24 19:39:55 +02:00
Yurii Mazurevich
b9b8b1893d Removed unnecessary comment 2023-03-24 14:15:30 +02:00
Yurii Mazurevich
f500d638c6 Added MPS device support 2023-03-24 14:12:56 +02:00
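MPS device support amounts to preferring the Metal backend when torch reports it; a minimal sketch using the standard availability checks:

```python
import torch

if torch.backends.mps.is_available():
    device = torch.device("mps")
elif torch.cuda.is_available():
    device = torch.device("cuda")
else:
    device = torch.device("cpu")
```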
comfyanonymous
7cae5f5769 Try again with vae tiled decoding if regular fails because of OOM. 2023-03-22 14:49:00 -04:00
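The OOM fallback retries the decode in tiles when the full-resolution pass does not fit; a hypothetical wrapper (vae.decode / vae.decode_tiled are illustrative method names, and the exception caught may be broader in practice):

```python
import torch

def decode_with_fallback(vae, samples):
    try:
        return vae.decode(samples)
    except torch.cuda.OutOfMemoryError:
        torch.cuda.empty_cache()
        # Tiled decoding trades speed for a much smaller peak allocation.
        return vae.decode_tiled(samples)
```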
comfyanonymous
db20d0a9fe Add laptop quadro cards to fp32 list. 2023-03-21 16:57:35 -04:00
comfyanonymous
dc8b43e512 Make --cpu have priority over everything else. 2023-03-13 21:30:01 -04:00
comfyanonymous
1de721b33c Add pytorch attention support to VAE. 2023-03-13 12:45:54 -04:00
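Pytorch attention here refers to torch 2.0's fused scaled_dot_product_attention, which can stand in for xformers inside the VAE's attention blocks; a self-contained sketch with made-up tensor shapes:

```python
import torch
import torch.nn.functional as F

# (batch, heads, tokens, head_dim); the shapes are illustrative only.
q = torch.randn(1, 1, 4096, 512)
k = torch.randn(1, 1, 4096, 512)
v = torch.randn(1, 1, 4096, 512)
out = F.scaled_dot_product_attention(q, k, v)
```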
comfyanonymous
72b42ab260 --disable-xformers should not even try to import xformers. 2023-03-13 11:36:48 -04:00
comfyanonymous
7c95e1a03b Xformers is now properly disabled when --cpu is used.
Added a --windows-standalone-build option; currently it only makes the code open up comfyui in the browser.
2023-03-12 15:44:16 -04:00
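Opening comfyui in the browser on startup only needs the standard library; a minimal sketch, with the port value as an assumption:

```python
import webbrowser

def open_ui(port: int = 8188):  # 8188 is an assumed default port
    webbrowser.open(f"http://127.0.0.1:{port}")
```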
comfyanonymous
ca7e2e3827 Add --cpu to use the cpu for inference. 2023-03-06 10:50:50 -05:00
comfyanonymous
8141cd7f42 Fix issue. 2023-03-03 13:18:01 -05:00
comfyanonymous
5608730809 To be really simple, CheckpointLoaderSimple should pick the right type. 2023-03-03 11:07:10 -05:00
comfyanonymous
b2a7f1b32a Make some cross attention functions work on the CPU. 2023-03-03 03:27:33 -05:00
comfyanonymous
b59b82a73b Add a way to interrupt current processing in the backend. 2023-03-02 14:42:03 -05:00
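Interrupting current processing can be done with a flag that the backend polls between steps, aborting via an exception; a hypothetical sketch (the names below are illustrative):

```python
import threading

class InterruptProcessingException(Exception):
    pass

_interrupt = threading.Event()

def interrupt_current_processing():
    _interrupt.set()  # called from the API/server side

def throw_exception_if_interrupted():
    # Called by the backend between sampling steps.
    if _interrupt.is_set():
        _interrupt.clear()
        raise InterruptProcessingException()
```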
comfyanonymous
1304a8f8ad Small adjustment. 2023-02-27 20:04:18 -05:00
comfyanonymous
bddbd4bdb0 Enable highvram automatically when vram >> ram 2023-02-27 19:57:39 -05:00
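When vram far exceeds system ram, unloading models to ram saves nothing, so highvram can be enabled automatically; a sketch of the comparison, with an illustrative threshold:

```python
import psutil

def auto_highvram(total_vram_bytes: int) -> bool:
    total_ram_bytes = psutil.virtual_memory().total
    # The 1.1 factor is an assumed "vram >> ram" threshold.
    return total_vram_bytes > total_ram_bytes * 1.1
```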
comfyanonymous
0f13853bd2 Add: --highvram for when you want models to stay on the vram. 2023-02-17 21:27:02 -05:00