Pedro Batista
0169c95d82
Don't use xformers or fp16 on AMD GPUs (not supported by them). Example device name:
```
In [8]: torch.cuda.get_device_properties("cuda").name
Out[8]: 'AMD Radeon RX 5700 XT'
```
2023-04-05 13:14:15 -03:00
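The check described above can be sketched from the device name shown in the example; the helper below is hypothetical (`is_amd_gpu` and the gating flags are not the commit's actual code), assuming the reported name is the signal used to skip xformers and fp16.
```
import torch

def is_amd_gpu() -> bool:
    # Heuristic: treat the device as AMD if its reported name says so.
    if not torch.cuda.is_available():
        return False
    name = torch.cuda.get_device_properties(0).name
    return "AMD" in name or "Radeon" in name

# Gate both features off on AMD hardware (illustrative).
use_xformers = use_fp16 = not is_amd_gpu()
```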
comfyanonymous
e46b1c3034
Disable xformers in VAE when xformers == 0.0.18
2023-04-04 22:22:02 -04:00
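A minimal sketch of such a version gate, assuming the installed xformers release is read via importlib.metadata; `XFORMERS_OK_FOR_VAE` is a hypothetical name, not the repository's actual flag.
```
from importlib.metadata import PackageNotFoundError, version

try:
    # 0.0.18 is the release the commit works around.
    XFORMERS_OK_FOR_VAE = version("xformers") != "0.0.18"
except PackageNotFoundError:
    XFORMERS_OK_FOR_VAE = False
```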
Francesco Yoshi Gobbo
f55755f0d2
code cleanup
2023-03-27 06:48:09 +02:00
Francesco Yoshi Gobbo
cf0098d539
no lowvram state if cpu only
2023-03-27 04:51:18 +02:00
comfyanonymous
4adcea7228
I don't think controlnets were being handled correctly by MPS.
2023-03-24 14:33:16 -04:00
Yurii Mazurevich
fc71e7ea08
Fixed typo
2023-03-24 19:39:55 +02:00
Yurii Mazurevich
4b943d2b60
Removed unnecessary comment
2023-03-24 14:15:30 +02:00
Yurii Mazurevich
89fd5ed574
Added MPS device support
2023-03-24 14:12:56 +02:00
comfyanonymous
3ed4a4e4e6
Try again with vae tiled decoding if regular fails because of OOM.
2023-03-22 14:49:00 -04:00
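The fallback pattern described above might look roughly like this; `vae.decode` and `vae.decode_tiled` are stand-ins for whatever decode entry points the code exposes, not a confirmed API.
```
import torch

def decode_with_fallback(vae, latent):
    # Try the regular decode first; fall back to tiled decoding on OOM.
    try:
        return vae.decode(latent)
    except torch.cuda.OutOfMemoryError:
        torch.cuda.empty_cache()
        return vae.decode_tiled(latent)
```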
comfyanonymous
9d0665c8d0
Add laptop quadro cards to fp32 list.
2023-03-21 16:57:35 -04:00
comfyanonymous
ee46bef03a
Make --cpu have priority over everything else.
2023-03-13 21:30:01 -04:00
comfyanonymous
83f23f82b8
Add pytorch attention support to VAE.
2023-03-13 12:45:54 -04:00
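PyTorch attention here refers to torch.nn.functional.scaled_dot_product_attention (available since PyTorch 2.0); the tensor shapes below are illustrative, not taken from the VAE code.
```
import torch
import torch.nn.functional as F

# Shapes: (batch, heads, tokens, head_dim)
q = torch.randn(1, 1, 4096, 64)
k = torch.randn(1, 1, 4096, 64)
v = torch.randn(1, 1, 4096, 64)
out = F.scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([1, 1, 4096, 64])
```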
comfyanonymous
a256a2abde
--disable-xformers should not even try to import xformers.
2023-03-13 11:36:48 -04:00
comfyanonymous
0f3ba7482f
Xformers is now properly disabled when --cpu used.
Added --windows-standalone-build option; currently it only makes the code open up ComfyUI in the browser.
2023-03-12 15:44:16 -04:00
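The browser-opening part of --windows-standalone-build could be as small as the sketch below; the address, port, and function name are assumptions, not the commit's actual values.
```
import webbrowser

def open_comfyui_in_browser(address: str = "127.0.0.1", port: int = 8188) -> None:
    # Open the local ComfyUI page in the default browser.
    webbrowser.open(f"http://{address}:{port}")
```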
comfyanonymous
afff30fc0a
Add --cpu to use the cpu for inference.
2023-03-06 10:50:50 -05:00
comfyanonymous
ebfcf0a9c9
Fix issue.
2023-03-03 13:18:01 -05:00
comfyanonymous
fed315a76a
To be really simple, CheckpointLoaderSimple should pick the right type.
2023-03-03 11:07:10 -05:00
comfyanonymous
c1f5855ac1
Make some cross attention functions work on the CPU.
2023-03-03 03:27:33 -05:00
comfyanonymous
69cc75fbf8
Add a way to interrupt current processing in the backend.
2023-03-02 14:42:03 -05:00
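One common way to implement such an interrupt is a module-level flag that the processing loop checks each step; the names below are hypothetical, not ComfyUI's actual backend API.
```
_interrupt_requested = False

def request_interrupt() -> None:
    # Called by the frontend/API when the user cancels the job.
    global _interrupt_requested
    _interrupt_requested = True

def throw_if_interrupted() -> None:
    # Called once per sampling step; unwinds the current job if requested.
    global _interrupt_requested
    if _interrupt_requested:
        _interrupt_requested = False
        raise RuntimeError("processing interrupted by user")
```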
comfyanonymous
2c5f0ec681
Small adjustment.
2023-02-27 20:04:18 -05:00
comfyanonymous
86721d5158
Enable highvram automatically when vram >> ram
2023-02-27 19:57:39 -05:00
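A sketch of the "vram >> ram" heuristic, assuming psutil for the system-RAM query; the plain greater-than comparison stands in for whatever margin the commit actually uses.
```
import psutil
import torch

def should_default_to_highvram() -> bool:
    if not torch.cuda.is_available():
        return False
    vram = torch.cuda.get_device_properties(0).total_memory
    ram = psutil.virtual_memory().total
    # Keep models resident on the GPU when VRAM clearly exceeds system RAM.
    return vram > ram
```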
comfyanonymous
2326ff1263
Add --highvram for when you want models to stay in VRAM.
2023-02-17 21:27:02 -05:00
comfyanonymous
d66415c021
Low vram mode for controlnets.
2023-02-17 15:48:16 -05:00
comfyanonymous
4efa67fa12
Add ControlNet support.
2023-02-16 10:38:08 -05:00
comfyanonymous
7e1e193f39
Automatically enable lowvram mode if vram is less than 4GB.
Use --normalvram to disable it.
2023-02-10 00:47:56 -05:00
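The 4GB cutoff can be sketched directly from the device properties; the helper name and the --normalvram plumbing are illustrative.
```
import torch

def should_default_to_lowvram(normalvram: bool = False) -> bool:
    if normalvram or not torch.cuda.is_available():
        return False
    total = torch.cuda.get_device_properties(0).total_memory
    return total < 4 * 1024 ** 3  # less than 4 GB of VRAM
```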
comfyanonymous
708138c77d
Remove print.
2023-02-08 14:51:18 -05:00
comfyanonymous
853e96ada3
Increase it/s by batching together some stuff sent to unet.
2023-02-08 14:24:00 -05:00
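The commit message doesn't spell out what gets batched; one pattern that matches the description is running the conditional and unconditional (classifier-free-guidance) passes through the unet in a single call instead of two. The sketch below is a guess at that idea, with `unet` as a stand-in callable.
```
import torch

def cfg_denoise_batched(unet, x, t, cond, uncond, cfg_scale):
    # One batched unet call for both passes, then split the result.
    x_in = torch.cat([x, x])
    t_in = torch.cat([t, t])
    c_in = torch.cat([cond, uncond])
    eps_cond, eps_uncond = unet(x_in, t_in, c_in).chunk(2)
    return eps_uncond + cfg_scale * (eps_cond - eps_uncond)
```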
comfyanonymous
c92633eaa2
Auto calculate amount of memory to use for --lowvram
2023-02-08 11:42:37 -05:00
comfyanonymous
534736b924
Add some low vram modes: --lowvram and --novram
2023-02-08 11:37:10 -05:00
comfyanonymous
a84cd0d1ad
Don't unload/reload model from CPU uselessly.
2023-02-08 03:40:43 -05:00