Commit Graph

173 Commits

Author SHA1 Message Date
comfyanonymous
bcdd11d687 Add a way to pass options to the transformers blocks. 2023-03-31 13:04:39 -04:00
comfyanonymous
4b174ce0b4 Fix noise mask not working with > 1 batch size on ksamplers. 2023-03-30 03:50:12 -04:00
comfyanonymous
7e2f4b0897 Split VAE decode batches depending on free memory. 2023-03-29 02:24:37 -04:00
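The batch-splitting heuristic this commit describes can be sketched roughly as follows. This is a hypothetical illustration, not ComfyUI's actual code; the function name and the per-sample memory cost are assumptions:

```python
def split_batches(num_samples, free_memory, mem_per_sample):
    """Split a decode batch into chunks that fit in free memory (hypothetical sketch)."""
    # Largest batch that fits in the currently free memory;
    # always decode at least one sample per pass.
    per_pass = max(1, int(free_memory // mem_per_sample))
    chunks = []
    start = 0
    while start < num_samples:
        chunks.append(min(per_pass, num_samples - start))
        start += chunks[-1]
    return chunks
```

With 4 units of free memory and 1.5 units per sample, a batch of 10 would be decoded in passes of 2.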
comfyanonymous
7a5ba4593c Fix ddim_uniform crashing with 37 steps. 2023-03-28 16:29:35 -04:00
Francesco Yoshi Gobbo
be201f0511 code cleanup 2023-03-27 06:48:09 +02:00
Francesco Yoshi Gobbo
28e1209e7e no lowvram state if cpu only 2023-03-27 04:51:18 +02:00
comfyanonymous
184ea9fd54 Fix ddim for Mac: #264 2023-03-26 00:36:54 -04:00
comfyanonymous
089cd3adc1 I don't think controlnets were being handled correctly by MPS. 2023-03-24 14:33:16 -04:00
comfyanonymous
4f76d503c7 Merge branch 'master' of https://github.com/GaidamakUA/ComfyUI 2023-03-24 13:56:43 -04:00
Yurii Mazurevich
1206cb3386 Fixed typo 2023-03-24 19:39:55 +02:00
comfyanonymous
5d8c210a1d Make ddim work with --cpu 2023-03-24 11:39:51 -04:00
Yurii Mazurevich
b9b8b1893d Removed unnecessary comment 2023-03-24 14:15:30 +02:00
Yurii Mazurevich
f500d638c6 Added MPS device support 2023-03-24 14:12:56 +02:00
comfyanonymous
121beab586 Support loha that use cp decomposition. 2023-03-23 04:32:25 -04:00
comfyanonymous
d15c9f2c3b Add loha support. 2023-03-23 03:40:12 -04:00
comfyanonymous
7cae5f5769 Try again with vae tiled decoding if regular fails because of OOM. 2023-03-22 14:49:00 -04:00
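The fallback pattern behind this commit is simple to sketch: attempt the regular decode, and only switch to the slower tiled path when it runs out of memory. The exception class and function names here are stand-ins, not the real ComfyUI API:

```python
class OutOfMemoryError(RuntimeError):
    """Stand-in for torch.cuda.OutOfMemoryError in this sketch."""

def decode_with_fallback(latent, regular_decode, tiled_decode):
    """Try the regular VAE decode first; fall back to tiled decoding on OOM (sketch)."""
    try:
        return regular_decode(latent)
    except OutOfMemoryError:
        # Tiled decoding trades speed for a much smaller peak memory footprint.
        return tiled_decode(latent)
```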
comfyanonymous
d8d65e7c9f Less seams in tiled outputs at the cost of more processing. 2023-03-22 03:29:09 -04:00
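The trade-off this commit mentions comes from overlapping the tiles: neighbouring tiles share a border region that can be blended, which hides seams but means the overlap is processed more than once. A minimal sketch of computing overlapping tile start positions (names and edge handling are assumptions):

```python
def tile_origins(length, tile, overlap):
    """Start positions of overlapping tiles covering [0, length) (hypothetical sketch).

    Overlap lets adjacent tile outputs be blended, reducing visible seams
    at the cost of decoding the overlap regions more than once.
    """
    if tile >= length:
        return [0]
    stride = tile - overlap
    origins = list(range(0, length - tile, stride))
    origins.append(length - tile)  # final tile sits flush with the edge
    return origins
```

For a length of 10 with tile size 4 and overlap 2 this yields starts at 0, 2, 4, and 6, so every interior pixel is covered by at least two tiles.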
comfyanonymous
13ec5cd3c2 Try to improve VAEEncode memory usage a bit. 2023-03-22 02:45:18 -04:00
comfyanonymous
db20d0a9fe Add laptop quadro cards to fp32 list. 2023-03-21 16:57:35 -04:00
comfyanonymous
921f4e14a4 Add support for locon mid weights. 2023-03-21 14:51:51 -04:00
comfyanonymous
29728e1119 Try to fix a vram issue with controlnets. 2023-03-19 10:50:38 -04:00
comfyanonymous
508010d114 Fix area composition feathering not working properly. 2023-03-19 02:00:52 -04:00
comfyanonymous
06c7a9b406 Support multiple paths for embeddings. 2023-03-18 03:08:43 -04:00
comfyanonymous
d5bf8038c8 Merge T2IAdapterLoader and ControlNetLoader.
Workflows will be auto updated.
2023-03-17 18:17:59 -04:00
comfyanonymous
dc8b43e512 Make --cpu have priority over everything else. 2023-03-13 21:30:01 -04:00
comfyanonymous
889689c7b2 use half() on fp16 models loaded with config. 2023-03-13 21:12:48 -04:00
comfyanonymous
edba761868 Use half() function on model when loading in fp16. 2023-03-13 20:58:09 -04:00
comfyanonymous
5be28c4069 Remove omegaconf dependency and some ci changes. 2023-03-13 14:49:18 -04:00
comfyanonymous
1de721b33c Add pytorch attention support to VAE. 2023-03-13 12:45:54 -04:00
comfyanonymous
72b42ab260 --disable-xformers should not even try to import xformers. 2023-03-13 11:36:48 -04:00
comfyanonymous
7c95e1a03b Xformers is now properly disabled when --cpu used.
Added --windows-standalone-build option; currently it only makes the code
open up comfyui in the browser.

2023-03-12 15:44:16 -04:00
comfyanonymous
d777b2f4a9 Add a VAEEncodeTiled node. 2023-03-11 15:28:15 -05:00
comfyanonymous
bbdc5924b4 Try to fix memory issue. 2023-03-11 15:15:13 -05:00
comfyanonymous
adabcc74fb Make tiled_scale work for downscaling. 2023-03-11 14:58:55 -05:00
comfyanonymous
77290f7110 Tiled upscaling with the upscale models. 2023-03-11 14:04:13 -05:00
comfyanonymous
66cca3659d Add locon support. 2023-03-09 21:41:24 -05:00
comfyanonymous
fa4dda2550 SD2.x controlnets now work. 2023-03-08 01:13:38 -05:00
comfyanonymous
341ebaabd8 Relative imports to test something. 2023-03-07 11:00:35 -05:00
edikius
148fe7b116 Fixed import (#44)
* fixed import error

I had an
ImportError: cannot import name 'Protocol' from 'typing'
while trying to update, so I fixed it so the app starts

* Update main.py

* deleted example files
2023-03-06 11:41:40 -05:00
comfyanonymous
339b2533eb Fix clip_skip no longer being loaded from yaml file. 2023-03-06 11:34:02 -05:00
comfyanonymous
ca7e2e3827 Add --cpu to use the cpu for inference. 2023-03-06 10:50:50 -05:00
comfyanonymous
663fa6eafd Implement support for t2i style model.
It needs the CLIPVision model so I added CLIPVisionLoader and CLIPVisionEncode.

Put the clip vision model in models/clip_vision
Put the t2i style model in models/style_models

StyleModelLoader to load it, StyleModelApply to apply it, and
ConditioningAppend to append the conditioning it outputs to a positive one.
2023-03-05 18:39:25 -05:00
comfyanonymous
f8f2ea3bb1 Make VAE use common function to get free memory. 2023-03-05 14:20:07 -05:00
comfyanonymous
ffc5c6707b Fix pytorch 2.0 cross attention not working. 2023-03-05 14:14:54 -05:00
comfyanonymous
7ee5e2d40e Add support for new colour T2I adapter model. 2023-03-03 19:13:07 -05:00
comfyanonymous
23be9a77b8 Update T2I adapter code to latest. 2023-03-03 18:46:49 -05:00
comfyanonymous
8141cd7f42 Fix issue. 2023-03-03 13:18:01 -05:00
comfyanonymous
c195dab61c Add a node to set CLIP skip.
Use a simpler way to detect if the model is v-prediction.
2023-03-03 13:04:36 -05:00
comfyanonymous
5608730809 To be really simple CheckpointLoaderSimple should pick the right type. 2023-03-03 11:07:10 -05:00
comfyanonymous
8a2699b47d New CheckpointLoaderSimple to load checkpoints without a config. 2023-03-03 03:37:35 -05:00
comfyanonymous
b2a7f1b32a Make some cross attention functions work on the CPU. 2023-03-03 03:27:33 -05:00
comfyanonymous
666a9e8604 Add some pytorch scaled_dot_product_attention code for testing.
--use-pytorch-cross-attention to use it.
2023-03-02 17:01:20 -05:00
comfyanonymous
b59b82a73b Add a way to interrupt current processing in the backend. 2023-03-02 14:42:03 -05:00
comfyanonymous
151fed3dfb Hopefully fix a strange issue with xformers + lowvram. 2023-02-28 13:48:52 -05:00
comfyanonymous
a59bb36cb8 Try to improve memory issues with del. 2023-02-28 12:27:43 -05:00
comfyanonymous
1304a8f8ad Small adjustment. 2023-02-27 20:04:18 -05:00
comfyanonymous
bddbd4bdb0 Enable highvram automatically when vram >> ram 2023-02-27 19:57:39 -05:00
comfyanonymous
14c390c0c2 Remove sample_ from some sampler names.
Old workflows will still work.
2023-02-27 01:43:06 -05:00
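Keeping old workflows working after a rename like the one above usually comes down to a small compatibility shim when sampler names are looked up. A hypothetical sketch (the function name is an assumption, not ComfyUI's code):

```python
def normalize_sampler_name(name):
    """Map old 'sample_'-prefixed sampler names to the new names (hypothetical sketch)."""
    prefix = "sample_"
    # Old workflows stored e.g. "sample_euler"; new code expects "euler".
    return name[len(prefix):] if name.startswith(prefix) else name
```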
comfyanonymous
7add9fe7b3 Preparing to add another function to load checkpoints. 2023-02-26 17:29:01 -05:00
comfyanonymous
84a4b1f283 Fix uni_pc sampler not working with 1 or 2 steps. 2023-02-26 04:01:01 -05:00
comfyanonymous
084f8084ed Fix multiple controlnets not working. 2023-02-25 22:12:22 -05:00
comfyanonymous
9f1ca0847b Fixed issue when batched image was used as a controlnet input. 2023-02-25 14:57:28 -05:00
comfyanonymous
7f31625158 Fix missing variable. 2023-02-25 12:19:03 -05:00
comfyanonymous
4a31343a3d Add a T2IAdapterLoader node to load T2I-Adapter models.
They are loaded as CONTROL_NET objects because they are similar.
2023-02-25 01:24:56 -05:00
comfyanonymous
5cb9c83936 Prepare for t2i adapter. 2023-02-24 23:36:17 -05:00
comfyanonymous
1dfbc27fb6 Remove some useless imports 2023-02-24 12:36:55 -05:00
comfyanonymous
a4778b6a31 Added an experimental VAEDecodeTiled.
This decodes the image with the VAE in tiles which should be faster and
use less vram.

It's in the _for_testing section, so I might change/remove it or even
add the functionality to the regular VAEDecode node depending on how
well it performs, which means don't depend too much on it.
2023-02-24 02:10:10 -05:00
comfyanonymous
f1b122ba02 Add a node to load diff controlnets. 2023-02-22 23:22:03 -05:00
comfyanonymous
2e62af0a30 Implement DDIM sampler. 2023-02-22 21:10:19 -05:00
comfyanonymous
a17765cf94 Uni_PC: make max denoise behave more like other samplers.
On the KSamplers a denoise of 1.0 is the same as txt2img, but there was a
small difference with UniPC.
2023-02-22 02:21:06 -05:00
comfyanonymous
2124888a6c Remove prints that are useless when xformers is enabled. 2023-02-21 22:16:13 -05:00
comfyanonymous
00ef1aabeb Add uni_pc bh2 variant. 2023-02-21 16:11:48 -05:00
comfyanonymous
9cd0189e7e ControlNetApply now stacks.
It can be used to apply multiple control nets at the same time.
2023-02-21 01:18:53 -05:00
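"Stacking" here means each applied control net keeps a reference to the one applied before it, forming a chain. A minimal sketch of that idea, with hypothetical class and field names that do not claim to match ComfyUI's internals:

```python
class ControlNet:
    """Minimal stand-in for a control net object (hypothetical sketch)."""
    def __init__(self, name, strength=1.0):
        self.name = name
        self.strength = strength
        self.previous = None  # the control net applied before this one, if any

    def copy_with_previous(self, previous):
        cn = ControlNet(self.name, self.strength)
        cn.previous = previous
        return cn

def apply_controlnet(conditioning, controlnet):
    """Attach a control net, chaining any control net already applied (sketch)."""
    new_cond = dict(conditioning)
    new_cond["control"] = controlnet.copy_with_previous(conditioning.get("control"))
    return new_cond
```

Applying a depth control net and then a canny one leaves a chain where the canny net points back to the depth net, so both influence sampling.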
comfyanonymous
69df07177d Support old pytorch. 2023-02-19 16:59:03 -05:00
comfyanonymous
a9207a2c8e Support people putting commas after the embedding name in the prompt. 2023-02-19 02:50:48 -05:00
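Tolerating a trailing comma after an embedding name amounts to stripping punctuation before looking the name up. A hypothetical sketch of such token handling (the `embedding:` prefix is real ComfyUI prompt syntax; the function name and lookup are assumptions):

```python
def parse_embedding_token(token, known_embeddings):
    """Resolve an 'embedding:name' prompt token, tolerating a trailing comma (sketch)."""
    prefix = "embedding:"
    if not token.startswith(prefix):
        return None
    # People often write "embedding:foo," inside a comma-separated prompt.
    name = token[len(prefix):].rstrip(",")
    return name if name in known_embeddings else None
```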
comfyanonymous
0f13853bd2 Add: --highvram for when you want models to stay on the vram. 2023-02-17 21:27:02 -05:00
comfyanonymous
273a3ebc67 Fix an OOM issue. 2023-02-17 16:21:01 -05:00
comfyanonymous
7dd65e43dd Low vram mode for controlnets. 2023-02-17 15:48:16 -05:00
comfyanonymous
7010bf9e60 Use fp16 for fp16 control nets. 2023-02-17 15:31:38 -05:00
comfyanonymous
9999b1e198 Add a way to control controlnet strength. 2023-02-16 18:08:01 -05:00
comfyanonymous
b5b68268ee Add ControlNet support. 2023-02-16 10:38:08 -05:00
comfyanonymous
6d0f92bcb2 Use inpaint models the proper way by using VAEEncodeForInpaint. 2023-02-15 20:44:51 -05:00
comfyanonymous
886b269d42 Support for inpaint models. 2023-02-15 16:38:20 -05:00
comfyanonymous
ce4e2f2955 Add masks to samplers code for inpainting. 2023-02-15 13:16:38 -05:00
comfyanonymous
e3451cea4f uni_pc now works with KSamplerAdvanced return_with_leftover_noise. 2023-02-13 12:29:21 -05:00
comfyanonymous
f542f248f1 Show the right amount of steps in the progress bar for uni_pc.
The extra step doesn't actually call the unet so it doesn't belong in
the progress bar.
2023-02-11 14:59:42 -05:00
comfyanonymous
f10b8948c3 768-v support for uni_pc sampler. 2023-02-11 04:34:58 -05:00
comfyanonymous
ce0aeb109e Remove print. 2023-02-11 03:41:40 -05:00
comfyanonymous
5489d5af04 Add uni_pc sampler to KSampler* nodes. 2023-02-11 03:34:09 -05:00
comfyanonymous
1a4edd19cd Fix overflow issue with inplace softmax. 2023-02-10 11:47:41 -05:00
comfyanonymous
509c7dfc6d Use real softmax in split op to fix issue with some images. 2023-02-10 03:13:49 -05:00
comfyanonymous
7e1e193f39 Automatically enable lowvram mode if vram is less than 4GB.
Use: --normalvram to disable it.
2023-02-10 00:47:56 -05:00
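Together with the earlier "Enable highvram automatically when vram >> ram" commit, this describes a small decision heuristic: explicit flags win, then low total vram forces lowvram, and vram well above system ram enables highvram. A hypothetical sketch of that logic (thresholds and names are assumptions, not ComfyUI's exact rules):

```python
def pick_vram_mode(total_vram_gb, total_ram_gb, forced=None):
    """Pick a vram state from heuristics like those the commits describe (sketch)."""
    if forced is not None:
        return forced          # e.g. --normalvram / --highvram override
    if total_vram_gb < 4:
        return "lowvram"       # not enough vram to keep the whole model resident
    if total_vram_gb > total_ram_gb:
        return "highvram"      # vram exceeds ram, so keep models on the GPU
    return "normal"
```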
comfyanonymous
324273fff2 Fix embedding not working when on new line. 2023-02-09 14:12:02 -05:00
comfyanonymous
1f6a467e92 Update ldm dir with latest upstream stable diffusion changes. 2023-02-09 13:47:36 -05:00
comfyanonymous
773cdabfce Same thing but for the other places where it's used. 2023-02-09 12:43:29 -05:00
comfyanonymous
df40d4f3bf torch.cuda.OutOfMemoryError is not present on older pytorch versions. 2023-02-09 12:33:27 -05:00
comfyanonymous
e8c499ddd4 Split optimization for VAE attention block. 2023-02-08 22:04:20 -05:00
comfyanonymous
5b4e312749 Use inplace operations for less OOM issues. 2023-02-08 22:04:13 -05:00
comfyanonymous
3fd87cbd21 Slightly smarter batching behaviour.
Try to keep batch sizes more consistent which seems to improve things on
AMD GPUs.
2023-02-08 17:28:43 -05:00
comfyanonymous
bbdcf0b737 Use relative imports for k_diffusion. 2023-02-08 16:51:19 -05:00