comfyanonymous
873b08bd0f
Add a way to set patches that modify the attn2 output.
...
Change the transformer patches function format to be more future-proof.
2023-06-18 22:58:22 -04:00
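A minimal sketch of what this patch format could look like, assuming a patch is a callable that receives the attn2 (cross-attention) output plus an options dict and returns a new tensor; the names here are illustrative, not the exact ComfyUI API:

```python
import torch

# Illustrative patch: damp the cross-attention contribution by a constant.
def scale_attn2_output(out: torch.Tensor, extra_options: dict) -> torch.Tensor:
    return out * 0.9

def apply_attn2_patches(out: torch.Tensor, patches, extra_options: dict) -> torch.Tensor:
    # The "more future-proof" format: every patch shares one signature, so
    # new patch kinds can be added without changing the call sites.
    for patch in patches:
        out = patch(out, extra_options)
    return out
```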
comfyanonymous
d35c0ce6ec
All the unet weights should now be initialized with the right dtype.
2023-06-15 18:42:30 -04:00
comfyanonymous
8e6cbe6270
Initialize more unet weights with the right dtype.
2023-06-15 15:00:10 -04:00
comfyanonymous
206af9a315
Initialize transformer unet block weights in the right dtype at the start.
2023-06-15 14:29:26 -04:00
comfyanonymous
6bbd86a061
Disable default weight initialization in unet conv2d for faster loading.
2023-06-14 19:46:08 -04:00
comfyanonymous
46a1fd77f4
This isn't needed for inference.
2023-06-14 13:05:08 -04:00
comfyanonymous
14508d7a5b
Speed up model loading a bit.
...
The default pytorch Linear initializes its weights on construction, which is useless and slow when a checkpoint overwrites them anyway.
2023-06-14 12:09:41 -04:00
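The waste comes from `nn.Linear` (and the conv2d change above) running `reset_parameters()` in the constructor even though checkpoint loading immediately overwrites the result. A small sketch of the general technique using `torch.nn.utils.skip_init` (PyTorch >= 1.10); this shows the standard PyTorch helper, not necessarily the exact approach in the commit:

```python
import torch
import torch.nn as nn

# skip_init() constructs the module on the meta device and materializes
# uninitialized storage instead of running the Kaiming-uniform init.
layer = nn.utils.skip_init(nn.Linear, 320, 1280)

# Weights are allocated but not initialized; loading a state dict
# afterwards fills in the real values.
state = {"weight": torch.randn(1280, 320), "bias": torch.zeros(1280)}
layer.load_state_dict(state)
```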
comfyanonymous
22cd647be0
Remove more useless files.
2023-06-13 02:22:19 -04:00
comfyanonymous
ce4e360edf
Cleanup: Remove a bunch of useless files.
2023-06-13 02:19:08 -04:00
comfyanonymous
c0a5444d1b
Make scaled_dot_product switch to sliced attention on OOM.
2023-05-20 16:01:02 -04:00
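A hedged sketch of this kind of fallback: attempt the fused kernel, and on CUDA OOM recompute attention in slices along the query dimension; the slice count and function name are illustrative:

```python
import torch
import torch.nn.functional as F

# Assumes q, k, v share the same head dim. On OOM, free the cache and
# redo the computation query-slice by query-slice.
def attention_with_oom_fallback(q, k, v, slices=4):
    try:
        return F.scaled_dot_product_attention(q, k, v)
    except torch.cuda.OutOfMemoryError:
        torch.cuda.empty_cache()
        out = torch.empty_like(q)
        step = (q.shape[-2] + slices - 1) // slices
        for i in range(0, q.shape[-2], step):
            out[..., i:i + step, :] = F.scaled_dot_product_attention(
                q[..., i:i + step, :], k, v)
        return out
```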
comfyanonymous
e33cb62d1b
Simplify and improve some vae attention code.
2023-05-20 15:07:21 -04:00
BlenderNeko
2e2c17131b
Minor changes for tiled sampler.
2023-05-12 23:49:09 +02:00
comfyanonymous
b877fefbb3
Lowvram mode for gligen and fix some lowvram issues.
2023-05-05 18:11:41 -04:00
comfyanonymous
2edaaba3c2
Fix imports.
2023-05-04 18:10:29 -04:00
comfyanonymous
71b4d08e65
Change latent resolution step to 8.
2023-05-02 14:17:51 -04:00
comfyanonymous
76730ed9ff
Make unet work with any input shape.
2023-05-02 13:31:43 -04:00
comfyanonymous
e6771d0986
Implement Linear hypernetworks.
...
Add a HypernetworkLoader node to use hypernetworks.
2023-04-23 12:35:25 -04:00
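A rough sketch of the common (A1111-style) hypernetwork formulation, assumed here as the likely shape of a "Linear" hypernetwork: small residual linear modules transform the cross-attention context before the key/value projections; sizes and names are illustrative:

```python
import torch
import torch.nn as nn

# "Linear" variant: two linear layers with no activation between them,
# applied residually to the cross-attention context.
class LinearHypernetModule(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.linear1 = nn.Linear(dim, dim * 2)
        self.linear2 = nn.Linear(dim * 2, dim)

    def forward(self, context: torch.Tensor) -> torch.Tensor:
        # Residual form: the hypernetwork adds a learned correction.
        return context + self.linear2(self.linear1(context))

# Separate modules typically transform the context for keys and values.
hyper_k, hyper_v = LinearHypernetModule(768), LinearHypernetModule(768)
context = torch.randn(1, 77, 768)
k_in, v_in = hyper_k(context), hyper_v(context)
```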
comfyanonymous
0d66023475
This makes pytorch2.0 attention perform a bit faster.
2023-04-22 14:30:39 -04:00
comfyanonymous
6c156642e4
Add support for GLIGEN textbox model.
2023-04-19 11:06:32 -04:00
comfyanonymous
4df70d0f62
Fix model_management import so it doesn't get executed twice.
2023-04-15 19:04:33 -04:00
EllangoK
cd00b46465
Separates out arg parser and imports args.
2023-04-05 23:41:23 -04:00
comfyanonymous
7597a5d83e
Disable xformers in VAE when xformers == 0.0.18
2023-04-04 22:22:02 -04:00
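A minimal sketch of that kind of version gate, assuming the check is a simple string compare against `xformers.__version__`; the helper name is hypothetical:

```python
import xformers

# Hypothetical helper: skip xformers in the VAE path only for the one
# release with the known problem, and allow it everywhere else.
def vae_can_use_xformers() -> bool:
    return xformers.__version__ != "0.0.18"
```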
comfyanonymous
0d1c6a3934
Pull latest tomesd code from upstream.
2023-04-03 15:49:28 -04:00
comfyanonymous
b55667284c
Add support for unCLIP SD2.x models.
...
See _for_testing/unclip in the UI for the new nodes.
unCLIPCheckpointLoader is used to load them.
unCLIPConditioning is used to add the image conditioning and takes as input a
CLIPVisionEncode output (that node has been moved to the conditioning section).
2023-04-01 23:19:15 -04:00
comfyanonymous
3567c01bc3
This seems to give better quality with tome.
2023-03-31 18:36:18 -04:00
comfyanonymous
7da8d5f9f5
Add a TomePatchModel node to the _for_testing section.
...
Tome increases sampling speed at the expense of quality.
2023-03-31 17:19:58 -04:00
comfyanonymous
bcdd11d687
Add a way to pass options to the transformers blocks.
2023-03-31 13:04:39 -04:00
comfyanonymous
7cae5f5769
Try again with vae tiled decoding if regular fails because of OOM.
2023-03-22 14:49:00 -04:00
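A sketch of the retry logic, with `decode_tiled` standing in as a hypothetical helper: decode the whole latent if memory allows, otherwise fall back to tiles, trading speed (and possible seam artifacts) for much lower peak memory:

```python
import torch

# decode_tiled and the tile sizes are hypothetical placeholders.
def decode_with_fallback(vae, latent: torch.Tensor) -> torch.Tensor:
    try:
        return vae.decode(latent)
    except torch.cuda.OutOfMemoryError:
        torch.cuda.empty_cache()
        return decode_tiled(vae, latent, tile=64, overlap=16)
```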
comfyanonymous
13ec5cd3c2
Try to improve VAEEncode memory usage a bit.
2023-03-22 02:45:18 -04:00
comfyanonymous
5be28c4069
Remove omegaconf dependency and make some ci changes.
2023-03-13 14:49:18 -04:00
comfyanonymous
1de721b33c
Add pytorch attention support to VAE.
2023-03-13 12:45:54 -04:00
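An illustrative version of a VAE attention block on top of PyTorch's fused kernel (assumed shapes, not the exact commit code): flatten the (b, c, h, w) feature map into (b, 1, hw, c) tokens, run single-head scaled dot product attention, and reshape back:

```python
import torch
import torch.nn.functional as F

def vae_attention(q, k, v):
    b, c, h, w = q.shape
    # (b, c, h, w) -> (b, 1, hw, c): one head over all spatial positions.
    q, k, v = (t.reshape(b, 1, c, h * w).transpose(2, 3) for t in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v)   # (b, 1, hw, c)
    return out.transpose(2, 3).reshape(b, c, h, w)
```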
comfyanonymous
72b42ab260
--disable-xformers should not even try to import xformers.
2023-03-13 11:36:48 -04:00
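A self-contained sketch of the guarded import: consult the CLI flag before attempting the import at all, so a broken or missing xformers install can never crash startup; the flag name matches the commit, the rest is illustrative:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--disable-xformers", action="store_true")
args, _ = parser.parse_known_args()

# With the flag set, the import is never even attempted.
XFORMERS_IS_AVAILABLE = False
if not args.disable_xformers:
    try:
        import xformers
        import xformers.ops
        XFORMERS_IS_AVAILABLE = True
    except ImportError:
        pass
```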
comfyanonymous
7c95e1a03b
Xformers is now properly disabled when --cpu used.
...
Added --windows-standalone-build option; currently it only makes the code
open comfyui in the browser.
2023-03-12 15:44:16 -04:00
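Opening the UI in the browser is a one-liner with the standard library; a sketch assuming the default local address (the port here is an assumption):

```python
import webbrowser

# Assumed default local address for the ComfyUI server; adjust as needed.
webbrowser.open("http://127.0.0.1:8188")
```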
comfyanonymous
bbdc5924b4
Try to fix memory issue.
2023-03-11 15:15:13 -05:00
edikius
148fe7b116
Fixed import ( #44 )
...
* Fixed import error
I hit an
ImportError: cannot import name 'Protocol' from 'typing'
while trying to update, so I fixed it so the app starts.
* Update main.py
* Deleted example files
2023-03-06 11:41:40 -05:00
comfyanonymous
f8f2ea3bb1
Make VAE use common function to get free memory.
2023-03-05 14:20:07 -05:00
comfyanonymous
ffc5c6707b
Fix pytorch 2.0 cross attention not working.
2023-03-05 14:14:54 -05:00
comfyanonymous
b2a7f1b32a
Make some cross attention functions work on the CPU.
2023-03-03 03:27:33 -05:00
comfyanonymous
666a9e8604
Add some pytorch scaled_dot_product_attention code for testing.
...
--use-pytorch-cross-attention to use it.
2023-03-02 17:01:20 -05:00
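A hedged sketch of multi-head cross-attention built on the kernel this flag enables: reshape (b, seq, heads * head_dim) into (b, heads, seq, head_dim), let PyTorch pick a fused backend, then reshape back; function and parameter names are illustrative:

```python
import torch
import torch.nn.functional as F

def pytorch_cross_attention(q, k, v, heads: int):
    b, _, inner = q.shape
    head_dim = inner // heads
    # k and v may have a different sequence length than q, hence -1.
    q, k, v = (t.view(t.shape[0], -1, heads, head_dim).transpose(1, 2)
               for t in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v)
    return out.transpose(1, 2).reshape(b, -1, inner)
```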
comfyanonymous
151fed3dfb
Hopefully fix a strange issue with xformers + lowvram.
2023-02-28 13:48:52 -05:00
comfyanonymous
5cb9c83936
Prepare for t2i adapter.
2023-02-24 23:36:17 -05:00
comfyanonymous
2124888a6c
Remove prints that are useless when xformers is enabled.
2023-02-21 22:16:13 -05:00
comfyanonymous
273a3ebc67
Fix an OOM issue.
2023-02-17 16:21:01 -05:00
comfyanonymous
b5b68268ee
Add ControlNet support.
2023-02-16 10:38:08 -05:00
comfyanonymous
1a4edd19cd
Fix overflow issue with inplace softmax.
2023-02-10 11:47:41 -05:00
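The standard fix for softmax overflow, sketched in-place here as an assumption about what the commit does: exp() of large half-precision logits overflows to inf, so subtract the row max first; the result is mathematically identical but every intermediate stays in range:

```python
import torch

def softmax_inplace_stable(scores: torch.Tensor) -> torch.Tensor:
    # Subtracting the per-row max is a no-op for softmax but prevents
    # exp() from overflowing; all ops mutate scores in place.
    scores -= scores.max(dim=-1, keepdim=True).values
    scores.exp_()
    scores /= scores.sum(dim=-1, keepdim=True)
    return scores
```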
comfyanonymous
509c7dfc6d
Use real softmax in split op to fix issue with some images.
2023-02-10 03:13:49 -05:00
comfyanonymous
1f6a467e92
Update ldm dir with latest upstream stable diffusion changes.
2023-02-09 13:47:36 -05:00
comfyanonymous
773cdabfce
Same fix, but applied to the other places where torch.cuda.OutOfMemoryError is used.
2023-02-09 12:43:29 -05:00
comfyanonymous
df40d4f3bf
torch.cuda.OutOfMemoryError is not present on older pytorch versions.
2023-02-09 12:33:27 -05:00
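A common compatibility shim for this (an assumption, not necessarily the commit's exact code): resolve the exception type once at import time, falling back to the plain RuntimeError older PyTorch raises on CUDA OOM:

```python
import torch

# torch.cuda.OutOfMemoryError only exists in newer PyTorch; older versions
# surface CUDA OOM as a plain RuntimeError, so pick whichever exists and
# use it in every except clause.
OOM_EXCEPTION = getattr(torch.cuda, "OutOfMemoryError", RuntimeError)
```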
comfyanonymous
e8c499ddd4
Split optimization for VAE attention block.
2023-02-08 22:04:20 -05:00