comfyanonymous
790073a21d
Move unet to device right after loading on highvram mode.
2023-06-29 20:43:06 -04:00
comfyanonymous
a9b4d721c0
Remove useless code.
2023-06-29 00:26:33 -04:00
comfyanonymous
edfb14822e
This is unused but it should be 1280.
2023-06-28 18:04:23 -04:00
comfyanonymous
ae27a5625e
Support for SDXL text encoder lora.
2023-06-28 02:22:49 -04:00
comfyanonymous
73519d4e76
Fix bug.
2023-06-28 00:38:07 -04:00
comfyanonymous
7b13cacfea
Use pytorch attention by default on nvidia when xformers isn't present.
Add a new argument --use-quad-cross-attention
2023-06-26 13:03:44 -04:00
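The entry above changes the default attention backend and adds a CLI flag. A minimal sketch of how that selection might be wired up; only the flag name comes from the commit, the helper names are hypothetical:

```python
import argparse
import torch

parser = argparse.ArgumentParser()
# The flag name is from the commit; everything around it is illustrative.
parser.add_argument("--use-quad-cross-attention", action="store_true",
                    help="Force the sub-quadratic cross-attention implementation.")
args = parser.parse_args()

def xformers_available():
    # Hypothetical probe for the optional xformers package.
    try:
        import xformers  # noqa: F401
        return True
    except ImportError:
        return False

def pick_attention_backend():
    if args.use_quad_cross_attention:
        return "quad"          # explicit user choice wins
    if xformers_available():
        return "xformers"
    if torch.cuda.is_available():
        return "pytorch"       # the new default on NVIDIA without xformers
    return "quad"
```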
comfyanonymous
95008c22cd
Add CheckpointSave node to save checkpoints.
The created checkpoints contain workflow metadata that can be loaded by
dragging them on top of the UI or loading them with the "Load" button.
Checkpoints will be saved in fp16 or fp32 depending on the format ComfyUI
is using for inference on your hardware. To force fp32 use: --force-fp32
Anything that patches the model weights like merging or loras will be
saved.
The output directory is currently set to: output/checkpoints but that might
change in the future.
2023-06-26 12:22:27 -04:00
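The checkpoint format described above pairs the weights with workflow metadata. A minimal sketch of the idea using safetensors, whose metadata field only accepts string values; the "workflow" key and the path are assumptions, not ComfyUI's actual schema:

```python
import json
from safetensors.torch import save_file

def save_checkpoint(state_dict, workflow,
                    path="output/checkpoints/checkpoint.safetensors"):
    # safetensors metadata must map str -> str, so the workflow graph is
    # serialized to JSON before embedding it in the file header.
    metadata = {"workflow": json.dumps(workflow)}
    # Tensors keep whatever dtype they already have (fp16 or fp32),
    # matching the commit's note that the inference format is reused.
    save_file(state_dict, path, metadata=metadata)
```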
comfyanonymous
0254a9d75b
Support loras based on the stability unet implementation.
2023-06-26 02:56:11 -04:00
comfyanonymous
44e3d3caed
Fix ddim + inpainting not working.
2023-06-26 00:48:48 -04:00
comfyanonymous
a098d7e4d2
Set the seed in the SDE samplers to make them more reproducible.
2023-06-25 03:04:57 -04:00
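Stochastic (SDE) samplers inject fresh noise at every step, so reproducibility requires pinning that noise to a seed. A sketch of the idea; the helper name and the (sigma, sigma_next) callback signature follow k-diffusion's noise-sampler convention but should be treated as illustrative:

```python
import torch

def make_seeded_noise_sampler(shape, seed, device="cpu"):
    # One generator per sampling run: the same seed now yields the same
    # noise sequence, making SDE results repeatable.
    generator = torch.Generator(device=device).manual_seed(seed)

    def noise_sampler(sigma, sigma_next):
        return torch.randn(shape, generator=generator, device=device)

    return noise_sampler
```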
comfyanonymous
6d80c5ed30
Add support for TAESD decoder for SDXL.
2023-06-25 02:38:14 -04:00
comfyanonymous
c4c2db6ead
Add DualClipLoader to load clip models for SDXL.
Update LoadClip to load clip models for SDXL refiner.
2023-06-25 01:40:38 -04:00
comfyanonymous
818bae8e52
Fix CLIPLoader node.
2023-06-24 13:56:46 -04:00
comfyanonymous
391ee8d21f
Fix bug with controlnet.
2023-06-24 11:02:38 -04:00
comfyanonymous
0db33017af
Add some more transformer hooks and move tomesd to comfy_extras.
Tomesd now uses q instead of x to decide which tokens to merge because
it seems to give better results.
2023-06-24 03:30:22 -04:00
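Token merging (tomesd) scores pairs of tokens for similarity and merges the closest ones; per the commit, the score is now computed from the attention queries q rather than the hidden states x. A heavily simplified sketch of that scoring step (real ToMe uses bipartite soft matching):

```python
import torch
import torch.nn.functional as F

def merge_scores(q):
    # q: [batch, tokens, dim] -- attention queries, not raw hidden states.
    qn = F.normalize(q, dim=-1)
    sim = qn @ qn.transpose(-1, -2)          # cosine similarity, all pairs
    sim.diagonal(dim1=-2, dim2=-1).fill_(float("-inf"))  # ignore self-pairs
    # Tokens whose best match is very similar are the best merge candidates.
    return sim.max(dim=-1).values            # [batch, tokens]
```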
comfyanonymous
51da619d73
Remove useless code.
2023-06-23 12:35:26 -04:00
comfyanonymous
a852d8b138
Move latent scale factor from VAE to model.
2023-06-23 02:33:31 -04:00
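Keeping the scale factor with the model rather than the VAE means one VAE implementation can serve models that scale their latents differently. A sketch of what a per-model latent format might look like; the class shape is an assumption, though 0.18215 is the well-known SD 1.x value:

```python
class LatentFormat:
    scale_factor = 0.18215  # SD 1.x; other model families use other values

    def process_in(self, latent):
        # Pixels -> VAE encoder -> this scaling -> diffusion model.
        return latent * self.scale_factor

    def process_out(self, latent):
        # Diffusion model -> undo scaling -> VAE decoder.
        return latent / self.scale_factor
```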
comfyanonymous
288f0c430d
Fix bug when yaml config has no clip params.
2023-06-23 01:12:59 -04:00
comfyanonymous
40f218c4fa
Fix error with ClipVision loader node.
2023-06-23 01:08:05 -04:00
comfyanonymous
4ed8aea1a1
Don't merge weights when shapes don't match and print a warning.
2023-06-22 19:08:31 -04:00
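A sketch of the guard this commit describes: when two checkpoints disagree on a tensor's shape (common when merging models with different conditioning dimensions), skip that weight and warn instead of crashing. The function is hypothetical:

```python
def merge_weight(key, w_a, w_b, ratio=0.5):
    if w_a.shape != w_b.shape:
        print(f"WARNING: shape mismatch for {key}: "
              f"{w_a.shape} vs {w_b.shape}, skipping merge")
        return w_a  # keep the base model's weight untouched
    return w_a * (1.0 - ratio) + w_b * ratio
```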
comfyanonymous
08f1f7686c
Support base SDXL and SDXL refiner models.
Large refactor of the model detection and loading code.
2023-06-22 13:03:50 -04:00
comfyanonymous
d72d5d49f5
Add original_shape parameter to transformer patch extra_options.
2023-06-21 13:22:01 -04:00
comfyanonymous
cd8d0b73c5
Fix last commits causing an issue with the text encoder lora.
2023-06-20 19:44:39 -04:00
comfyanonymous
af9e05f389
Keep a set of model_keys for faster add_patches.
2023-06-20 19:08:48 -04:00
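The speedup comes from membership tests: a set of the model's state-dict keys answers "does this patch target an existing weight?" in O(1), instead of probing (or rebuilding) the state dict per patch. An illustrative sketch, not ComfyUI's actual ModelPatcher:

```python
class ModelPatcher:
    def __init__(self, model):
        self.model = model
        self.patches = {}
        # Built once; "key in set" is O(1) for every add_patches call after.
        self.model_keys = set(model.state_dict().keys())

    def add_patches(self, patches, strength=1.0):
        applied = []
        for key, patch in patches.items():
            if key not in self.model_keys:
                continue  # patch targets a weight this model doesn't have
            self.patches[key] = (strength, patch)
            applied.append(key)
        return applied
```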
comfyanonymous
2c71c47ff9
Add a type of model patch useful for model merging.
2023-06-20 17:34:11 -04:00
comfyanonymous
5384849e5e
Fix k_diffusion math being off by a tiny bit during txt2img.
2023-06-19 15:28:54 -04:00
comfyanonymous
873b08bd0f
Add a way to set patches that modify the attn2 output.
Change the transformer patches function format to be more future proof.
2023-06-18 22:58:22 -04:00
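Passing an options dict to every patch is what makes the format "future proof": new fields (like the original_shape added later in this log) can appear without breaking existing callbacks. A sketch of such a registry; the names are illustrative rather than ComfyUI's exact API:

```python
transformer_patches = {"attn2_output_patch": []}

def set_attn2_output_patch(fn):
    transformer_patches["attn2_output_patch"].append(fn)

def apply_attn2_output_patches(out, extra_options):
    # Each patch sees the current attn2 output plus a dict of extras,
    # so the call signature never has to change when the extras grow.
    for fn in transformer_patches["attn2_output_patch"]:
        out = fn(out, extra_options)
    return out
```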
comfyanonymous
26bce56c3b
Pop clip vision keys after loading them.
2023-06-18 21:21:17 -04:00
comfyanonymous
c1e1b00941
Not needed anymore.
2023-06-18 13:06:59 -04:00
comfyanonymous
78eabd0fd4
This is not needed anymore and causes issues with alphas_cumprod.
2023-06-18 03:18:25 -04:00
comfyanonymous
351d778487
Fix DDIM v-prediction.
2023-06-17 20:48:21 -04:00
comfyanonymous
a93a785f9e
Fix an issue when alphas_cumprod are half floats.
2023-06-16 17:16:51 -04:00
comfyanonymous
d35c0ce6ec
All the unet weights should now be initialized with the right dtype.
2023-06-15 18:42:30 -04:00
comfyanonymous
282638b813
Add a --gpu-only argument to keep and run everything on the GPU.
Make the CLIP model work on the GPU.
2023-06-15 15:38:52 -04:00
comfyanonymous
8e6cbe6270
Initialize more unet weights as the right dtype.
2023-06-15 15:00:10 -04:00
comfyanonymous
206af9a315
Initialize transformer unet block weights in right dtype at the start.
2023-06-15 14:29:26 -04:00
comfyanonymous
16c535a309
Properly disable weight initialization in clip models.
2023-06-14 20:13:08 -04:00
comfyanonymous
6bbd86a061
Disable default weight values in unet conv2d for faster loading.
2023-06-14 19:46:08 -04:00
comfyanonymous
46a1fd77f4
This isn't needed for inference.
2023-06-14 13:05:08 -04:00
comfyanonymous
54c3f45d58
Don't initialize CLIPVision weights to default values.
2023-06-14 12:57:02 -04:00
comfyanonymous
0a80429175
Set model to fp16 before loading the state dict to lower ram bump.
2023-06-14 12:48:02 -04:00
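The RAM saving works because the conversion order matters: halving the freshly built model first means you never hold fp32 parameters and the checkpoint copy in memory simultaneously. A minimal sketch:

```python
import torch

def load_as_fp16(model, state_dict):
    model.half()  # convert *before* loading to avoid the fp32 peak
    model.load_state_dict(state_dict)  # copy_ casts incoming tensors to fp16
    return model
```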
comfyanonymous
3e75be0a92
Don't initialize clip weights to default values.
2023-06-14 12:47:36 -04:00
comfyanonymous
14508d7a5b
Speed up model loading a bit.
Default pytorch Linear initializes the weights which is useless and slow.
2023-06-14 12:09:41 -04:00
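One common way to skip the wasted initialization is to override reset_parameters(), which nn.Linear.__init__ calls to fill weights that load_state_dict() will immediately overwrite. A sketch of the pattern; recent PyTorch also offers torch.nn.utils.skip_init() for the same purpose:

```python
import torch.nn as nn

class LinearNoInit(nn.Linear):
    def reset_parameters(self):
        # nn.Linear normally does a Kaiming-uniform fill here; skipping it
        # leaves the weights as uninitialized memory, which is fine when a
        # checkpoint is about to be loaded over them anyway.
        return None
```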
comfyanonymous
fd8fa51a6d
sampler_cfg_function now uses a dict for the argument.
This means arguments can be added without issues.
2023-06-13 16:10:36 -04:00
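With a dict argument, new keys can be added later without changing every callback's signature. A sketch of what such a CFG callback might look like; the key names follow the standard classifier-free-guidance formula and should be treated as illustrative:

```python
def sampler_cfg_function(args):
    cond, uncond = args["cond"], args["uncond"]
    cond_scale = args["cond_scale"]
    # Standard CFG: push the conditioned prediction away from the
    # unconditioned one by the guidance scale.
    return uncond + (cond - uncond) * cond_scale
```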
comfyanonymous
e4e6a23fec
Turn on safe load for a few models.
2023-06-13 10:12:03 -04:00
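"Safe load" here means refusing arbitrary pickled objects, which a malicious .ckpt can use to run code on load. A sketch using PyTorch's weights_only flag (available in recent PyTorch versions); the wrapper name is illustrative:

```python
import torch

def load_torch_file(path):
    # weights_only=True restricts unpickling to tensor data and rejects
    # the code-execution payloads a hostile checkpoint could carry.
    return torch.load(path, map_location="cpu", weights_only=True)
```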
comfyanonymous
27c98d1d24
Remove pytorch_lightning dependency.
2023-06-13 10:11:33 -04:00
comfyanonymous
807faa8edd
Remove useless code.
2023-06-13 02:40:58 -04:00
comfyanonymous
22cd647be0
Remove more useless files.
2023-06-13 02:22:19 -04:00
comfyanonymous
ce4e360edf
Cleanup: Remove a bunch of useless files.
2023-06-13 02:19:08 -04:00
comfyanonymous
a2bc14b56f
Split the batch in VAEEncode if there's not enough memory.
2023-06-12 00:21:50 -04:00
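A sketch of the fallback this commit describes: try to encode the whole batch, and on an out-of-memory error split it in half and recurse. Assumes a PyTorch recent enough to expose torch.cuda.OutOfMemoryError; vae.encode() is a stand-in for the real call:

```python
import torch

def encode_batched(vae, pixels):
    try:
        return vae.encode(pixels)
    except torch.cuda.OutOfMemoryError:
        if pixels.shape[0] == 1:
            raise  # a single image doesn't fit; nothing left to split
        torch.cuda.empty_cache()
        half = pixels.shape[0] // 2
        return torch.cat([encode_batched(vae, pixels[:half]),
                          encode_batched(vae, pixels[half:])], dim=0)
```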