Commit Graph

338 Commits

comfyanonymous
391ee8d21f Fix bug with controlnet. 2023-06-24 11:02:38 -04:00
comfyanonymous
0db33017af Add some more transformer hooks and move tomesd to comfy_extras.
Tomesd now uses q instead of x to decide which tokens to merge because
it seems to give better results.
2023-06-24 03:30:22 -04:00
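
The entry above switches the token-merging similarity measure from the hidden states x to the attention queries q. A rough sketch of the idea (names and shapes are illustrative, not ComfyUI's actual tomesd code):

```python
import torch

def tome_merge_candidates(q: torch.Tensor, ratio: float = 0.5):
    """q: (batch, tokens, channels). Returns, for a ToMe-style patch, which
    src tokens are most similar to a dst token and could be merged away."""
    q = q / q.norm(dim=-1, keepdim=True)        # compare in cosine-similarity space
    src, dst = q[:, ::2, :], q[:, 1::2, :]      # alternate tokens into two groups
    scores = src @ dst.transpose(-1, -2)        # (batch, src, dst) similarities
    node_max, node_idx = scores.max(dim=-1)     # best dst match for each src token
    edge_idx = node_max.argsort(dim=-1, descending=True)
    r = int(src.shape[1] * ratio)               # how many tokens to merge
    return edge_idx[..., :r], node_idx          # src tokens to fold into their dst
```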
comfyanonymous
51da619d73 Remove useless code. 2023-06-23 12:35:26 -04:00
comfyanonymous
a852d8b138 Move latent scale factor from VAE to model. 2023-06-23 02:33:31 -04:00
comfyanonymous
288f0c430d Fix bug when yaml config has no clip params. 2023-06-23 01:12:59 -04:00
comfyanonymous
40f218c4fa Fix error with ClipVision loader node. 2023-06-23 01:08:05 -04:00
comfyanonymous
4ed8aea1a1 Don't merge weights when shapes don't match and print a warning. 2023-06-22 19:08:31 -04:00
comfyanonymous
08f1f7686c Support base SDXL and SDXL refiner models.
Large refactor of the model detection and loading code.
2023-06-22 13:03:50 -04:00
comfyanonymous
d72d5d49f5 Add original_shape parameter to transformer patch extra_options. 2023-06-21 13:22:01 -04:00
comfyanonymous
cd8d0b73c5 Fix last commits causing an issue with the text encoder lora. 2023-06-20 19:44:39 -04:00
comfyanonymous
af9e05f389 Keep a set of model_keys for faster add_patches. 2023-06-20 19:08:48 -04:00
comfyanonymous
2c71c47ff9 Add a type of model patch useful for model merging. 2023-06-20 17:34:11 -04:00
comfyanonymous
5384849e5e Fix k_diffusion math being off by a tiny bit during txt2img. 2023-06-19 15:28:54 -04:00
comfyanonymous
873b08bd0f Add a way to set patches that modify the attn2 output.
Change the transformer patches function format to be more future-proof.
2023-06-18 22:58:22 -04:00
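
A hedged sketch of what such a patch can look like: a callable that receives the attn2 output tensor plus an extra_options dict (to which the original_shape commit higher up in this log adds an entry). The registration API is internal to ComfyUI, and the key names below are assumptions from the commit messages:

```python
# Hypothetical attn2-output patch; "original_shape" is assumed to be the
# latent shape (b, c, h, w) based on the commit message, not a verified API.
def my_attn2_output_patch(out, extra_options):
    h, w = extra_options["original_shape"][-2:]
    return out * 1.05  # e.g. mildly rescale the cross-attention contribution
```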
comfyanonymous
26bce56c3b Pop clip vision keys after loading them. 2023-06-18 21:21:17 -04:00
comfyanonymous
c1e1b00941 Not needed anymore. 2023-06-18 13:06:59 -04:00
comfyanonymous
78eabd0fd4 This is not needed anymore and causes issues with alphas_cumprod. 2023-06-18 03:18:25 -04:00
comfyanonymous
351d778487 Fix DDIM v-prediction. 2023-06-17 20:48:21 -04:00
comfyanonymous
a93a785f9e Fix an issue when alphas_cumprod are half floats. 2023-06-16 17:16:51 -04:00
comfyanonymous
d35c0ce6ec All the unet weights should now be initialized with the right dtype. 2023-06-15 18:42:30 -04:00
comfyanonymous
282638b813 Add a --gpu-only argument to keep and run everything on the GPU.
Make the CLIP model work on the GPU.
2023-06-15 15:38:52 -04:00
comfyanonymous
8e6cbe6270 Initialize more unet weights as the right dtype. 2023-06-15 15:00:10 -04:00
comfyanonymous
206af9a315 Initialize transformer unet block weights in right dtype at the start. 2023-06-15 14:29:26 -04:00
comfyanonymous
16c535a309 Properly disable weight initialization in clip models. 2023-06-14 20:13:08 -04:00
comfyanonymous
6bbd86a061 Disable default weight values in unet conv2d for faster loading. 2023-06-14 19:46:08 -04:00
comfyanonymous
46a1fd77f4 This isn't needed for inference. 2023-06-14 13:05:08 -04:00
comfyanonymous
54c3f45d58 Don't initialize CLIPVision weights to default values. 2023-06-14 12:57:02 -04:00
comfyanonymous
0a80429175 Set model to fp16 before loading the state dict to lower the RAM usage spike. 2023-06-14 12:48:02 -04:00
comfyanonymous
3e75be0a92 Don't initialize clip weights to default values. 2023-06-14 12:47:36 -04:00
comfyanonymous
14508d7a5b Speed up model loading a bit.
Default pytorch Linear initializes the weights, which is useless and slow.
2023-06-14 12:09:41 -04:00
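
PyTorch's default Linear runs a kaiming-style reset_parameters on construction, wasted work when a checkpoint's state dict immediately overwrites everything. A minimal sketch of the usual workaround (similar in spirit to what the commits above do):

```python
import torch

class Linear(torch.nn.Linear):
    """Linear layer that skips the default weight initialization; the
    weights stay uninitialized until load_state_dict fills them in."""
    def reset_parameters(self):
        return None  # intentionally a no-op
```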
comfyanonymous
fd8fa51a6d sampler_cfg_function now uses a dict for the argument.
This means arguments can be added without issues.
2023-06-13 16:10:36 -04:00
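
With a single dict argument, new fields can be added later without breaking existing callbacks. A sketch of such a function; the key names ("cond", "uncond", "cond_scale") are assumptions based on the commit message, not a verified API:

```python
def my_cfg_function(args):
    cond, uncond = args["cond"], args["uncond"]
    scale = args["cond_scale"]
    # standard classifier-free guidance combination
    return uncond + (cond - uncond) * scale
```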
comfyanonymous
e4e6a23fec Turn on safe load for a few models. 2023-06-13 10:12:03 -04:00
comfyanonymous
27c98d1d24 Remove pytorch_lightning dependency. 2023-06-13 10:11:33 -04:00
comfyanonymous
807faa8edd Remove useless code. 2023-06-13 02:40:58 -04:00
comfyanonymous
22cd647be0 Remove more useless files. 2023-06-13 02:22:19 -04:00
comfyanonymous
ce4e360edf Cleanup: Remove a bunch of useless files. 2023-06-13 02:19:08 -04:00
comfyanonymous
a2bc14b56f Split the batch in VAEEncode if there's not enough memory. 2023-06-12 00:21:50 -04:00
comfyanonymous
8ef4de36f4 Auto switch to tiled VAE encode if regular one runs out of memory. 2023-06-11 23:25:39 -04:00
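
These two commits follow the same try-then-degrade pattern. A minimal sketch, assuming vae.encode / vae.encode_tiled as stand-ins for the real methods (ComfyUI's actual OOM handling is more involved):

```python
import torch

def encode_with_fallback(vae, pixels):
    """Try a regular VAE encode; on CUDA OOM, fall back to a tiled encode
    that processes the image in smaller pieces."""
    try:
        return vae.encode(pixels)
    except torch.cuda.OutOfMemoryError:
        torch.cuda.empty_cache()         # release the failed allocation
        return vae.encode_tiled(pixels)  # bounded peak memory, slight seams
```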
comfyanonymous
b46c4e51ed Refactor unCLIP noise augment out of samplers.py 2023-06-11 04:01:18 -04:00
comfyanonymous
1e0ad9564e Simpler base model code. 2023-06-09 12:31:16 -04:00
comfyanonymous
cbf4192f8d Fix bug when embedding gets ignored because of mismatched size. 2023-06-08 23:48:14 -04:00
comfyanonymous
6c6ef17bd4 Small refactor. 2023-06-06 13:23:01 -04:00
comfyanonymous
d421e5d610 Refactor previews into one command line argument.
Clean up a few things.
2023-06-06 02:13:05 -04:00
space-nuko
2a03db6500 Preview method autodetection. 2023-06-05 18:59:10 -05:00
space-nuko
8b515aae12 Add latent2rgb preview. 2023-06-05 18:39:56 -05:00
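
latent2rgb previews skip the VAE entirely: the 4 latent channels are projected to RGB with a small fixed matrix. A sketch with approximate SD1.x factors (the exact constants vary slightly across implementations):

```python
import torch

# Approximate latent -> RGB projection for SD1.x latents (illustrative values).
LATENT_RGB_FACTORS = torch.tensor([
    #   R       G       B
    [ 0.298,  0.207,  0.208],  # latent channel 0
    [ 0.187,  0.286,  0.173],  # latent channel 1
    [-0.158,  0.189,  0.264],  # latent channel 2
    [-0.184, -0.271, -0.473],  # latent channel 3
])

def latent2rgb(latent: torch.Tensor) -> torch.Tensor:
    """latent: (4, H, W) -> uint8 RGB (H, W, 3) preview, no VAE decode."""
    rgb = torch.einsum("chw,cr->rhw", latent, LATENT_RGB_FACTORS)
    rgb = ((rgb + 1.0) / 2.0).clamp(0, 1)   # map roughly [-1, 1] -> [0, 1]
    return (rgb * 255).byte().permute(1, 2, 0)
```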
space-nuko
a3b400faa4 Make previews into a CLI option. 2023-06-05 13:19:02 -05:00
space-nuko
a816ca9091 Preview sampled images with TAESD. 2023-06-05 09:20:17 -05:00
comfyanonymous
442430dcef Some comments to say what the vram state options mean. 2023-06-04 17:51:04 -04:00
comfyanonymous
e136e86a13 Cleanups and fixes for model_management.py
Hopefully fix regression on MPS and CPU.
2023-06-03 11:05:37 -04:00
comfyanonymous
d7821166b2 Implement global average pooling for controlnet. 2023-06-03 01:49:03 -04:00
comfyanonymous
6b80950a41 Refactor and improve model_management code related to free memory. 2023-06-02 15:21:33 -04:00
space-nuko
7cb90ba509 More accurate total 2023-06-02 00:14:41 -05:00
space-nuko
22b707f1cf System stats endpoint 2023-06-01 23:26:23 -05:00
comfyanonymous
f929d8df00 Tweak lowvram model memory so it's closer to what it was before. 2023-06-01 04:04:35 -04:00
comfyanonymous
f6f0a25226 Empty cache on mps. 2023-06-01 03:52:51 -04:00
comfyanonymous
fa734798f1 This is useless for inference. 2023-05-31 13:03:24 -04:00
comfyanonymous
4af4fe017b Auto load model in lowvram if not enough memory. 2023-05-30 12:36:41 -04:00
comfyanonymous
ebf6e2c421 Add route to get safetensors metadata:
/view_metadata/loras?filename=lora.safetensors
2023-05-29 02:48:50 -04:00
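
Reading safetensors metadata is cheap because the format puts a JSON header up front: the first 8 bytes are a little-endian u64 header length, and user metadata lives under the "__metadata__" key. A self-contained reader sketch:

```python
import json
import struct

def read_safetensors_metadata(path):
    """Return the __metadata__ dict from a .safetensors file's JSON header,
    without loading any tensor data."""
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]  # little-endian u64
        header = json.loads(f.read(header_len))
    return header.get("__metadata__", {})
```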
comfyanonymous
12e275ab18 Support VAEs in diffusers format. 2023-05-28 02:02:09 -04:00
comfyanonymous
c7572ef29d Refactor diffusers model convert code to be able to reuse it. 2023-05-28 01:55:40 -04:00
comfyanonymous
3d8f70ac86 Remove einops. 2023-05-25 18:42:56 -04:00
comfyanonymous
ee30188247 Do operations in the same order as the code it replaces. 2023-05-25 18:31:27 -04:00
comfyanonymous
8359d8589b Merge branch 'master' of https://github.com/BlenderNeko/ComfyUI 2023-05-25 14:44:16 -04:00
comfyanonymous
4a2e5b0950 Support old pytorch versions that don't have weights_only. 2023-05-25 13:30:59 -04:00
BlenderNeko
0ea63a0aeb Vectorized bislerp. 2023-05-25 19:23:47 +02:00
comfyanonymous
30152c282e Various improvements to bislerp. 2023-05-23 11:40:24 -04:00
comfyanonymous
f166739922 Add experimental bislerp algorithm for latent upscaling.
It's like bilinear but with slerp.
2023-05-23 03:12:56 -04:00
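
Per the message, bislerp interpolates like bilinear but replaces each linear mix with spherical interpolation. A sketch of the slerp core (the vectorized bislerp above weaves this through two axis passes):

```python
import torch

def slerp(a, b, t):
    """Spherical interpolation between a and b along the last dimension."""
    a_n = a / a.norm(dim=-1, keepdim=True)
    b_n = b / b.norm(dim=-1, keepdim=True)
    omega = torch.acos((a_n * b_n).sum(-1, keepdim=True).clamp(-1, 1))
    so = torch.sin(omega)
    res = (torch.sin((1 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
    # fall back to plain lerp where the vectors are (nearly) parallel
    lerp = (1 - t) * a + t * b
    return torch.where(so.abs() < 1e-6, lerp, res)
```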
comfyanonymous
ab51137f21 Auto transpose images from exif data. 2023-05-22 00:22:24 -04:00
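
Pillow handles the EXIF Orientation tag directly, which is the standard way to do this:

```python
from PIL import Image, ImageOps

img = Image.open("input.png")
img = ImageOps.exif_transpose(img)  # rotate/flip per the EXIF Orientation tag
```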
comfyanonymous
8781b6a542 sample_dpmpp_2m_sde no longer crashes when step == 1. 2023-05-21 11:34:29 -04:00
comfyanonymous
6dd63b638e Add DPM-Solver++(2M) SDE and exponential scheduler.
exponential scheduler is the one recommended with this sampler.
2023-05-21 01:46:03 -04:00
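
The exponential scheduler spaces noise levels uniformly in log-space, as in k-diffusion. A minimal sketch:

```python
import math
import torch

def get_sigmas_exponential(n, sigma_min, sigma_max):
    """n sigmas spaced uniformly in log-space, plus a final zero sigma."""
    sigmas = torch.linspace(math.log(sigma_max), math.log(sigma_min), n).exp()
    return torch.cat([sigmas, sigmas.new_zeros([1])])
```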
comfyanonymous
c0a5444d1b Make scaled_dot_product switch to sliced attention on OOM. 2023-05-20 16:01:02 -04:00
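
A hedged sketch of the fallback, not the exact ComfyUI implementation: on OOM, recompute attention over query slices so peak memory stays bounded (this assumes q and v share the same head dimension):

```python
import torch
import torch.nn.functional as F

def attention(q, k, v, slice_size=1024):
    try:
        return F.scaled_dot_product_attention(q, k, v)
    except torch.cuda.OutOfMemoryError:
        torch.cuda.empty_cache()
        out = torch.empty_like(q)
        for i in range(0, q.shape[-2], slice_size):  # slice the query axis
            out[..., i:i + slice_size, :] = F.scaled_dot_product_attention(
                q[..., i:i + slice_size, :], k, v)
        return out
```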
comfyanonymous
e33cb62d1b Simplify and improve some vae attention code. 2023-05-20 15:07:21 -04:00
comfyanonymous
8035000551 Switch default scheduler to normal. 2023-05-15 00:29:56 -04:00
comfyanonymous
18f2a99ea2 Merge branch 'tiled_sampler' of https://github.com/BlenderNeko/ComfyUI 2023-05-14 15:39:39 -04:00
comfyanonymous
8c539fa5bc Print the torch device that is used on startup. 2023-05-13 17:11:27 -04:00
BlenderNeko
dd1be4e992 Make nodes map over input lists (#579)
* allow nodes to map over lists

* make it work with IS_CHANGED and VALIDATE_INPUTS

* give list outputs distinct socket shape

* add rebatch node

* add batch index logic

* add repeat latent batch

* deal with noise mask edge cases in latentfrombatch
2023-05-13 11:15:45 -04:00
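
A hedged sketch of the "map over lists" idea from PR #579: when a node receives list-valued inputs, the executor calls it once per element, broadcasting scalars and shorter lists (how the real executor handles mismatched lengths may differ):

```python
def map_node_over_list(node_fn, inputs: dict):
    list_lens = [len(v) for v in inputs.values() if isinstance(v, list)]
    if not list_lens:
        return node_fn(**inputs)           # no list inputs: single call
    n = max(list_lens)
    def pick(v, i):                        # broadcast scalars / shorter lists
        return v[i % len(v)] if isinstance(v, list) else v
    return [node_fn(**{k: pick(v, i) for k, v in inputs.items()})
            for i in range(n)]
```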
BlenderNeko
3cbb3ef058 Comment out annoying print statement. 2023-05-12 23:57:40 +02:00
BlenderNeko
2e2c17131b Minor changes for tiled sampler. 2023-05-12 23:49:09 +02:00
comfyanonymous
ffa3282218 Auto batching improvements.
Try batching with smart padding when cond sizes don't match.
2023-05-10 13:59:24 -04:00
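
An illustrative sketch of padding two conds of different token lengths so they can share a batch; here the last token is repeated, though the actual "smart padding" may differ:

```python
import torch

def pad_conds_to_match(cond_a, cond_b):
    """conds: (batch, tokens, channels). Pad the shorter one along the
    token axis by repeating its last token."""
    def pad(c, target):
        reps = target - c.shape[1]
        if reps <= 0:
            return c
        return torch.cat([c, c[:, -1:].repeat(1, reps, 1)], dim=1)
    target = max(cond_a.shape[1], cond_b.shape[1])
    return pad(cond_a, target), pad(cond_b, target)
```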
comfyanonymous
9102a82f07 Not needed anymore because sampling works with any latent size. 2023-05-09 12:18:18 -04:00
comfyanonymous
9110a44cc3 Make t2i adapter work with any latent resolution. 2023-05-08 18:15:19 -04:00
comfyanonymous
bdb5f4ff03 Merge branch 'autostart' of https://github.com/EllangoK/ComfyUI 2023-05-07 17:19:03 -04:00
comfyanonymous
f7e427c557 Make maximum_batch_area take into account the pytorch 2.0 attention function.
More conservative xformers maximum_batch_area.
2023-05-06 19:58:54 -04:00
comfyanonymous
57f35b3d16 maximum_batch_area for xformers.
Remove useless code.
2023-05-06 19:28:46 -04:00
EllangoK
0e90bee158 Auto-launch CLI arg. 2023-05-06 16:59:40 -04:00
comfyanonymous
b877fefbb3 Lowvram mode for gligen and fix some lowvram issues. 2023-05-05 18:11:41 -04:00
comfyanonymous
a6d2a9487d Search recursively in subfolders for embeddings. 2023-05-05 01:28:48 -04:00
comfyanonymous
93fc8c1ea9 Fix import. 2023-05-05 00:19:35 -04:00
comfyanonymous
2edaaba3c2 Fix imports. 2023-05-04 18:10:29 -04:00
comfyanonymous
10ff210ffb Refactor. 2023-05-03 17:48:35 -04:00
comfyanonymous
bf7c588c68 Merge branch 'tiled-progress' of https://github.com/pythongosssss/ComfyUI 2023-05-03 16:24:56 -04:00
pythongosssss
eaeac55c0d Remove unused import. 2023-05-03 18:21:23 +01:00
pythongosssss
d8017626fb Use comfy progress bar. 2023-05-03 18:19:22 +01:00
comfyanonymous
ad56cfe814 Add a total_steps value to sampler callback. 2023-05-03 12:58:10 -04:00
pythongosssss
f6154607f9 Merge remote-tracking branch 'origin/master' into tiled-progress 2023-05-03 17:33:42 +01:00
pythongosssss
8160309db9 Reduce duplication. 2023-05-03 17:33:19 +01:00
comfyanonymous
df01203955 Use sampler callback instead of tqdm hook for progress bar. 2023-05-02 23:00:49 -04:00
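
A small example of such a progress callback; the (step, x0, x, total_steps) signature is inferred from this commit and the total_steps commit above, not from documented API:

```python
def progress_callback(step, x0, x, total_steps):
    # x0 is the denoised estimate, x the current noisy latent (assumed)
    print(f"sampling step {step + 1}/{total_steps}")
```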
pythongosssss
33b0ba6464 Added progress to encode + upscale. 2023-05-02 19:18:07 +01:00
comfyanonymous
71b4d08e65 Change latent resolution step to 8. 2023-05-02 14:17:51 -04:00
comfyanonymous
76730ed9ff Make unet work with any input shape. 2023-05-02 13:31:43 -04:00