comfyanonymous
01a6f9b116
Fix issue with gligen.
2023-08-18 16:32:23 -04:00
comfyanonymous
280659a6ee
Support for Control Loras.
...
Control loras are controlnets where some of the weights are stored in
"lora" format: a low rank up matrix and a down matrix that, when
multiplied together and added to the unet weight, give the controlnet
weight.
This allows a much smaller memory footprint depending on the rank of the
matrices.
These controlnets are used just like regular ones.
2023-08-18 11:59:51 -04:00
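A minimal sketch of the reconstruction described above, in plain PyTorch; the shapes and names (`w_unet`, `up`, `down`) are illustrative, not the actual ComfyUI variables:

```python
import torch

# Illustrative shapes: a hypothetical 320x320 weight stored at rank 64.
out_features, in_features, rank = 320, 320, 64

w_unet = torch.randn(out_features, in_features)  # weight shared with the unet
up = torch.randn(out_features, rank)             # "lora" up matrix
down = torch.randn(rank, in_features)            # "lora" down matrix

# The controlnet weight is the unet weight plus the product of the two
# low rank matrices, as the commit message describes.
w_controlnet = w_unet + up @ down

# Storage drops from out*in to rank*(out+in) per patched weight.
print(w_unet.numel(), up.numel() + down.numel())
```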
comfyanonymous
8a8d8c86d6
Initialize the unet directly on the target device.
2023-07-29 14:51:56 -04:00
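For reference, a sketch of direct-on-device initialization using PyTorch 2.0's device context manager; the single linear layer stands in for the real unet blocks:

```python
import torch

# PyTorch 2.0 lets a torch.device act as a context manager: modules
# constructed inside it are allocated directly on that device, avoiding
# a CPU construction followed by a .to() copy.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
with device:
    block = torch.nn.Linear(4096, 4096)  # parameters live on `device` already
print(block.weight.device)
```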
comfyanonymous
4ec7f09adc
It's actually possible to torch.compile the unet now.
2023-07-18 21:36:35 -04:00
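A minimal illustration of what compiling the unet amounts to, assuming stock `torch.compile` from PyTorch 2.x; `unet` below is a placeholder module, not the real model:

```python
import torch

# `unet` stands in for the loaded diffusion model (an nn.Module).
# torch.compile (PyTorch >= 2.0) wraps the module and JIT-compiles the
# forward pass the first time it runs.
unet = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.SiLU())  # placeholder
compiled_unet = torch.compile(unet)

x = torch.randn(1, 8)
out = compiled_unet(x)  # first call triggers compilation
```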
comfyanonymous
fa8010f038
Disable autocast in unet for increased speed.
2023-07-05 21:58:29 -04:00
comfyanonymous
033dc1f52a
Cleanup.
2023-07-02 11:58:23 -04:00
comfyanonymous
391ee8d21f
Fix bug with controlnet.
2023-06-24 11:02:38 -04:00
comfyanonymous
0db33017af
Add some more transformer hooks and move tomesd to comfy_extras.
...
Tomesd now uses q instead of x to decide which tokens to merge because
it seems to give better results.
2023-06-24 03:30:22 -04:00
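A rough sketch of the idea, assuming the usual ToMe-style bipartite matching; the function and variable names are illustrative, not the comfy_extras implementation:

```python
import torch

def tome_merge_candidates(q: torch.Tensor, r: int):
    """Simplified bipartite matching in the spirit of ToMe, using the
    attention queries `q` (B, N, C) as the similarity metric instead of
    the raw hidden states x."""
    metric = q / q.norm(dim=-1, keepdim=True)      # cosine-similarity space
    src, dst = metric[:, ::2], metric[:, 1::2]     # alternate tokens -> two sets
    scores = src @ dst.transpose(-1, -2)           # pairwise similarities
    best_val, best_dst = scores.max(dim=-1)        # closest dst for each src
    merge_idx = best_val.argsort(dim=-1, descending=True)[:, :r]
    return merge_idx, best_dst                     # fold these src tokens into dst
```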
comfyanonymous
08f1f7686c
Support base SDXL and SDXL refiner models.
...
Large refactor of the model detection and loading code.
2023-06-22 13:03:50 -04:00
comfyanonymous
d72d5d49f5
Add original_shape parameter to transformer patch extra_options.
2023-06-21 13:22:01 -04:00
comfyanonymous
873b08bd0f
Add a way to set patches that modify the attn2 output.
...
Change the transformer patches function format to be more future-proof.
2023-06-18 22:58:22 -04:00
comfyanonymous
d35c0ce6ec
All the unet weights should now be initialized with the right dtype.
2023-06-15 18:42:30 -04:00
comfyanonymous
206af9a315
Initialize transformer unet block weights in right dtype at the start.
2023-06-15 14:29:26 -04:00
comfyanonymous
46a1fd77f4
This isn't needed for inference.
2023-06-14 13:05:08 -04:00
comfyanonymous
14508d7a5b
Speed up model loading a bit.
...
Default pytorch Linear initializes the weights, which is useless and slow.
2023-06-14 12:09:41 -04:00
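A sketch of two common ways to skip that initialization, on the assumption that the checkpoint's weights overwrite the parameters anyway; the subclass is illustrative:

```python
import torch

# torch.nn.utils.skip_init (PyTorch >= 1.10) constructs a module without
# running its weight initializer; parameters are allocated but left
# uninitialized, since load_state_dict will overwrite them.
linear = torch.nn.utils.skip_init(torch.nn.Linear, 4096, 4096)

# Equivalent hand-rolled approach: a Linear whose reset_parameters is a no-op.
class UninitializedLinear(torch.nn.Linear):
    def reset_parameters(self) -> None:
        pass  # skip the kaiming init that the checkpoint load will discard
```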
comfyanonymous
b877fefbb3
Lowvram mode for gligen and fix some lowvram issues.
2023-05-05 18:11:41 -04:00
comfyanonymous
2edaaba3c2
Fix imports.
2023-05-04 18:10:29 -04:00
comfyanonymous
e6771d0986
Implement Linear hypernetworks.
...
Add a HypernetworkLoader node to use hypernetworks.
2023-04-23 12:35:25 -04:00
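A hedged sketch of what a "Linear" hypernetwork block typically looks like in Stable Diffusion tooling: two linear layers applied to the cross-attention context with a residual. The layer sizes and the residual are assumptions, not the exact ComfyUI code:

```python
import torch

class LinearHypernetworkModule(torch.nn.Module):
    """Sketch of a "Linear" hypernetwork block under the common SD
    convention: a down/up pair of linear layers added back as a residual."""
    def __init__(self, dim: int, multiplier: int = 2):
        super().__init__()
        self.down = torch.nn.Linear(dim, dim * multiplier)
        self.up = torch.nn.Linear(dim * multiplier, dim)

    def forward(self, context: torch.Tensor) -> torch.Tensor:
        return context + self.up(self.down(context))

# At sampling time the patched attention would use separate modules for
# the key and value paths, roughly:
#   k = attn.to_k(hyper_k(context)); v = attn.to_v(hyper_v(context))
```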
comfyanonymous
0d66023475
This makes pytorch 2.0 attention perform a bit faster.
2023-04-22 14:30:39 -04:00
comfyanonymous
6c156642e4
Add support for GLIGEN textbox model.
2023-04-19 11:06:32 -04:00
comfyanonymous
4df70d0f62
Fix model_management import so it doesn't get executed twice.
2023-04-15 19:04:33 -04:00
EllangoK
cd00b46465
Separates out arg parser and imports args
2023-04-05 23:41:23 -04:00
comfyanonymous
7da8d5f9f5
Add a TomePatchModel node to the _for_testing section.
...
Tome increases sampling speed at the expense of quality.
2023-03-31 17:19:58 -04:00
comfyanonymous
bcdd11d687
Add a way to pass options to the transformers blocks.
2023-03-31 13:04:39 -04:00
comfyanonymous
7cae5f5769
Try again with vae tiled decoding if the regular decode fails because of OOM.
2023-03-22 14:49:00 -04:00
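The fallback pattern, sketched with illustrative method names; `decode` and `decode_tiled` are stand-ins for the real VAE API:

```python
import torch

def decode_with_fallback(vae, latent):
    """Try a regular VAE decode; if CUDA runs out of memory, retry with
    tiled decoding, which processes the latent in smaller pieces."""
    try:
        return vae.decode(latent)
    except torch.cuda.OutOfMemoryError:  # PyTorch >= 1.13 exception type
        torch.cuda.empty_cache()         # release the failed allocation
        return vae.decode_tiled(latent)  # decode the latent tile by tile
```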
comfyanonymous
1de721b33c
Add pytorch attention support to VAE.
2023-03-13 12:45:54 -04:00
comfyanonymous
72b42ab260
--disable-xformers should not even try to import xformers.
2023-03-13 11:36:48 -04:00
comfyanonymous
7c95e1a03b
Xformers is now properly disabled when --cpu is used.
...
Added --windows-standalone-build option; currently it only makes the
code open up comfyui in the browser.
2023-03-12 15:44:16 -04:00
comfyanonymous
ffc5c6707b
Fix pytorch 2.0 cross attention not working.
2023-03-05 14:14:54 -05:00
comfyanonymous
b2a7f1b32a
Make some cross attention functions work on the CPU.
2023-03-03 03:27:33 -05:00
comfyanonymous
666a9e8604
Add some pytorch scaled_dot_product_attention code for testing.
...
Pass --use-pytorch-cross-attention to use it.
2023-03-02 17:01:20 -05:00
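A minimal sketch of attention built on `F.scaled_dot_product_attention` (PyTorch >= 2.0), assuming (batch, tokens, heads*dim_head) inputs; not the exact comfy code:

```python
import torch
import torch.nn.functional as F

def pytorch_attention(q, k, v, heads: int):
    """Fused attention via F.scaled_dot_product_attention."""
    b, n, _ = q.shape
    # (B, N, H*D) -> (B, H, N, D) for the fused kernel
    q, k, v = (t.view(b, -1, heads, t.shape[-1] // heads).transpose(1, 2)
               for t in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v)
    return out.transpose(1, 2).reshape(b, n, -1)  # back to (B, N, H*D)
```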
comfyanonymous
151fed3dfb
Hopefully fix a strange issue with xformers + lowvram.
2023-02-28 13:48:52 -05:00
comfyanonymous
2124888a6c
Remove prints that are useless when xformers is enabled.
2023-02-21 22:16:13 -05:00
comfyanonymous
773cdabfce
Same thing but for the other places where it's used.
2023-02-09 12:43:29 -05:00
comfyanonymous
50db297cf6
Try to fix OOM issues with cards that have less vram than mine.
2023-01-29 00:50:46 -05:00
comfyanonymous
051f472e8f
Fix sub quadratic attention for SD2 and make it the default optimization.
2023-01-25 01:22:43 -05:00
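A simplified sketch of the memory-saving half of the idea: chunk the query dimension so the full score matrix is never materialized at once. The real sub-quadratic implementation also chunks keys/values with log-sum-exp accumulation; this version shows only query chunking and assumes q and v share the head dimension:

```python
import torch

def query_chunked_attention(q, k, v, chunk: int = 1024):
    """Attention computed one query chunk at a time, so peak memory is
    O(chunk * N_k) instead of O(N_q * N_k)."""
    scale = q.shape[-1] ** -0.5
    out = torch.empty_like(q)  # assumes v's head dim matches q's
    for i in range(0, q.shape[-2], chunk):
        scores = q[..., i:i + chunk, :] @ k.transpose(-1, -2) * scale
        out[..., i:i + chunk, :] = torch.softmax(scores, dim=-1) @ v
    return out
```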
comfyanonymous
220afe3310
Initial commit.
2023-01-16 22:37:14 -05:00