comfyanonymous
62a4b04e7f
Pass extra conds directly to unet.
2023-10-25 00:07:53 -04:00
comfyanonymous
141c4ffcba
Refactor to make it easier to add custom conds to models.
2023-10-24 23:31:12 -04:00
comfyanonymous
f18406d838
Make sure cond_concat is on the right device.
2023-10-19 01:14:25 -04:00
comfyanonymous
5cf44c22ad
Refactor cond_concat into model object.
2023-10-18 16:48:37 -04:00
comfyanonymous
02f4208e1f
Allow having a different pooled output for each image in a batch.
2023-09-21 01:14:42 -04:00
comfyanonymous
e68beb56e4
Support SDXL inpaint models.
2023-09-01 15:22:52 -04:00
comfyanonymous
61036397c8
It doesn't make sense for c_crossattn and c_concat to be lists.
2023-08-31 13:25:00 -04:00
comfyanonymous
21b72ff81b
Move beta_schedule to model_config and allow disabling unet creation.
2023-08-29 14:22:53 -04:00
comfyanonymous
90bfcef833
Fix lowvram model merging.
2023-08-26 11:52:07 -04:00
comfyanonymous
398390a76f
ReVision support: unclip nodes can now be used with SDXL.
2023-08-18 11:59:36 -04:00
comfyanonymous
601e4a9865
Refactor unclip code.
2023-08-14 23:48:47 -04:00
comfyanonymous
736e2e8f49
CLIPVisionEncode can now encode multiple images.
2023-08-14 16:54:05 -04:00
comfyanonymous
8a8d8c86d6
Initialize the unet directly on the target device.
2023-07-29 14:51:56 -04:00
comfyanonymous
cb47a5674c
Remove some prints.
2023-07-27 16:12:43 -04:00
comfyanonymous
cea5c2adfb
Add key to indicate checkpoint is v_prediction when saving.
2023-07-18 00:25:53 -04:00
comfyanonymous
debccdc6f9
Refactor of sampler code to deal more easily with different model types.
2023-07-17 01:22:12 -04:00
comfyanonymous
fa8010f038
Disable autocast in unet for increased speed.
2023-07-05 21:58:29 -04:00
comfyanonymous
73519d4e76
Fix bug.
2023-06-28 00:38:07 -04:00
comfyanonymous
95008c22cd
Add CheckpointSave node to save checkpoints.
The created checkpoints contain workflow metadata that can be loaded by
dragging them on top of the UI or loading them with the "Load" button.
Checkpoints will be saved in fp16 or fp32 depending on the format ComfyUI
is using for inference on your hardware. To force fp32 use: --force-fp32
Anything that patches the model weights, like merging or loras, will be
saved.
The output directory is currently set to output/checkpoints, but that might
change in the future.
2023-06-26 12:22:27 -04:00
comfyanonymous
a852d8b138
Move latent scale factor from VAE to model.
2023-06-23 02:33:31 -04:00
comfyanonymous
08f1f7686c
Support base SDXL and SDXL refiner models.
Large refactor of the model detection and loading code.
2023-06-22 13:03:50 -04:00
comfyanonymous
b46c4e51ed
Refactor unCLIP noise augment out of samplers.py
2023-06-11 04:01:18 -04:00
comfyanonymous
1e0ad9564e
Simpler base model code.
2023-06-09 12:31:16 -04:00