Commit Graph

170 Commits

Author SHA1 Message Date
comfyanonymous
d752c5f236 Make VAE memory estimation take dtype into account. 2023-11-22 18:17:19 -05:00
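A dtype-aware memory estimate like the one this commit describes boils down to scaling the activation cost by the element size of the compute dtype. A minimal sketch (the function name, per-pixel constant, and dtype table are made up for illustration, not ComfyUI's actual estimator):

```python
# Illustrative sketch only: the constant and names are assumptions,
# not ComfyUI's real formula.
DTYPE_SIZE = {"fp32": 4, "fp16": 2, "bf16": 2, "fp8": 1}  # bytes per element

def vae_memory_estimate(height, width, dtype="fp32", elems_per_pixel=545):
    """Rough decode-memory estimate in bytes: a per-pixel activation
    count scaled by the element size of the dtype the VAE runs in."""
    return height * width * elems_per_pixel * DTYPE_SIZE[dtype]
```

With this shape of formula, running the VAE in fp16 halves the estimate relative to fp32 for the same image size.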
comfyanonymous
212bb0e108 Allow model config to preprocess the vae state dict on load. 2023-11-21 16:29:18 -05:00
comfyanonymous
eeceef948d Add taesd and taesdxl to VAELoader node.
They will show up if both the taesd_encoder and taesd_decoder model files,
or their taesdxl equivalents, are present in the models/vae_approx directory.
2023-11-21 12:54:19 -05:00
comfyanonymous
c013b8e94c Add some command line arguments to store text encoder weights in fp8.
Pytorch supports two variants of fp8:
--fp8_e4m3fn-text-enc (the one that seems to give better results)
--fp8_e5m2-text-enc
2023-11-17 02:56:59 -05:00
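The two fp8 variants trade precision for range: e4m3fn keeps more mantissa bits (which is likely why it tends to give better results for weights), while e5m2 keeps more exponent bits for a wider range. A small sketch computing the largest finite value each format can represent:

```python
def fp8_max_finite(exp_bits, man_bits, flavour):
    """Largest finite value of an fp8 format.

    "e4m3fn": no infinities; only the all-ones bit pattern is NaN, so the
    top exponent is usable with all but the largest mantissa.
    "e5m2": IEEE-style; the top exponent field is reserved for inf/NaN.
    """
    bias = 2 ** (exp_bits - 1) - 1
    if flavour == "e4m3fn":
        exp = (2 ** exp_bits - 1) - bias                 # top field usable
        frac = 1 + (2 ** man_bits - 2) / 2 ** man_bits   # all-ones mantissa is NaN
    else:  # "e5m2"
        exp = (2 ** exp_bits - 2) - bias                 # top field is inf/NaN
        frac = 1 + (2 ** man_bits - 1) / 2 ** man_bits
    return 2.0 ** exp * frac

print(fp8_max_finite(4, 3, "e4m3fn"))  # 448.0
print(fp8_max_finite(5, 2, "e5m2"))    # 57344.0
```

So e5m2 reaches much larger magnitudes, but e4m3fn resolves values more finely, which matters more for storing trained weights.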
comfyanonymous
6e99d21369 Print leftover keys when using the UNETLoader. 2023-11-07 22:15:55 -05:00
comfyanonymous
cdfb16b654 Allow model or clip to be None in load_lora_for_models. 2023-11-01 20:27:20 -04:00
comfyanonymous
9533904e39 Fix checkpoint loader with config. 2023-10-27 22:13:55 -04:00
comfyanonymous
d4bc91d58f Support SSD1B model and make it easier to support asymmetric unets. 2023-10-27 14:45:15 -04:00
comfyanonymous
23abd3ec84 Fix some potential issues. 2023-10-18 19:48:36 -04:00
comfyanonymous
459787f78f Make VAE code closer to sgm. 2023-10-17 15:18:51 -04:00
comfyanonymous
728139a5b9 Refactor code so model can be a dtype other than fp32 or fp16. 2023-10-13 14:41:17 -04:00
comfyanonymous
4456b4d137 load_checkpoint_guess_config can now optionally output the model. 2023-10-06 13:48:18 -04:00
City
ee2d8e46e1 Fix quality loss due to low precision 2023-10-04 15:40:59 +02:00
comfyanonymous
c59ff7dc9b Print missing VAE keys. 2023-09-28 00:54:57 -04:00
comfyanonymous
ccc2f830d7 Add ldm format support to UNETLoader. 2023-09-11 16:36:50 -04:00
comfyanonymous
e68beb56e4 Support SDXL inpaint models. 2023-09-01 15:22:52 -04:00
comfyanonymous
2c2ba07ff2 Fix "Load Checkpoint with config" node. 2023-08-29 23:58:32 -04:00
comfyanonymous
4fb6163a21 Text encoder should initially load on the offload_device, not the regular device. 2023-08-28 15:08:45 -04:00
comfyanonymous
201631e61d Move ModelPatcher to model_patcher.py 2023-08-28 14:51:31 -04:00
comfyanonymous
a2602fc4f9 Move controlnet code to comfy/controlnet.py 2023-08-25 17:33:04 -04:00
comfyanonymous
7fb8cbb5ac Move lora code to comfy/lora.py 2023-08-25 17:11:51 -04:00
comfyanonymous
145e279e6c Move text_projection to base clip model. 2023-08-24 23:43:48 -04:00
comfyanonymous
74d1dfb0ad Try to free enough vram for control lora inference. 2023-08-24 17:20:54 -04:00
comfyanonymous
e340ef7852 Always shift text encoder to GPU when the device supports fp16. 2023-08-23 21:45:00 -04:00
comfyanonymous
1aff0360c3 Initialize text encoder to target dtype. 2023-08-23 21:01:15 -04:00
comfyanonymous
e7fc7fb557 Save memory by storing text encoder weights in fp16 in most situations.
Do inference in fp32 to make sure quality stays exactly the same.
2023-08-23 01:08:51 -04:00
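The pattern this commit describes — store weights in half precision, upcast for compute — can be sketched with Python's struct module (a toy illustration of the storage/compute split, not the actual implementation):

```python
import struct

def store_fp16(x):
    """Storage: round a weight to fp16, halving its memory footprint."""
    return struct.pack("<e", x)  # "<e" = little-endian half precision

def load_fp32(b):
    """Inference: upcast the stored weight back to full precision."""
    return struct.unpack("<e", b)[0]

stored = store_fp16(0.5)
assert len(stored) == 2      # 2 bytes per weight instead of 4 for fp32
restored = load_fp32(stored) # computation then proceeds in full precision
```

The rounding happens once at storage time; because the upcast is exact, doing the actual math in fp32 keeps results deterministic for a given set of stored weights.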
comfyanonymous
ed16480867 All resolutions now work with t2i adapter for SDXL. 2023-08-22 16:23:54 -04:00
comfyanonymous
b168bdf3e5 T2I adapter SDXL. 2023-08-22 14:40:43 -04:00
comfyanonymous
08af73e450 Controlnet/t2iadapter cleanup. 2023-08-22 01:06:26 -04:00
comfyanonymous
f29b9306fd Fix control lora not working in fp32. 2023-08-21 20:38:31 -04:00
comfyanonymous
b982fd039e Fix ControlLora on lowvram. 2023-08-21 00:54:04 -04:00
comfyanonymous
819c4a42d3 Remove autocast from controlnet code. 2023-08-20 21:47:32 -04:00
comfyanonymous
225a5f9f1f Free more memory before VAE encode/decode. 2023-08-19 12:13:13 -04:00
comfyanonymous
280659a6ee Support for Control Loras.
Control loras are controlnets where some of the weights are stored in
"lora" format: an "up" and a "down" low-rank matrix that, when multiplied
together and added to the unet weight, give the controlnet weight.

This allows a much smaller memory footprint depending on the rank of the
matrices.

These controlnets are used just like regular ones.
2023-08-18 11:59:51 -04:00
comfyanonymous
e075077ad8 Fix potential issues with patching models when saving checkpoints. 2023-08-17 11:07:08 -04:00
comfyanonymous
a216b56591 Smarter memory management.
Try to keep models on the vram when possible.

Better lowvram mode for controlnets.
2023-08-17 01:06:34 -04:00
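The "keep models in VRAM when possible" policy can be sketched as a least-recently-used cache that only evicts when a new load would not fit. The class and method names below are hypothetical, not ComfyUI's actual model-management API:

```python
class ModelCache:
    """Hypothetical sketch: models stay resident until VRAM is needed,
    then the least recently used ones are evicted first."""

    def __init__(self, vram_total):
        self.vram_total = vram_total
        self.loaded = []  # (name, size) pairs, most recently used last

    def used(self):
        return sum(size for _, size in self.loaded)

    def load(self, name, size):
        # Re-loading a resident model just refreshes its recency.
        self.loaded = [(n, s) for n, s in self.loaded if n != name]
        # Free just enough: evict LRU models until the new one fits.
        while self.used() + size > self.vram_total and self.loaded:
            self.loaded.pop(0)
        self.loaded.append((name, size))
```

A lowvram-style mode would additionally load only parts of a model at a time instead of evicting whole models; this sketch covers only the keep-resident/evict-LRU idea.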
comfyanonymous
1a21a2271e Support diffusers mini controlnets. 2023-08-16 12:28:01 -04:00
comfyanonymous
a2d5028ad8 Support for yet another lora type based on diffusers. 2023-08-11 13:04:21 -04:00
comfyanonymous
c5a20b5c85 Support diffusers text encoder loras. 2023-08-10 20:28:28 -04:00
comfyanonymous
531ea9c1aa Detect hint_channels from controlnet. 2023-08-06 14:08:59 -04:00
comfyanonymous
18c180b727 Support loras in diffusers format. 2023-08-05 01:40:24 -04:00
comfyanonymous
8a8d8c86d6 Initialize the unet directly on the target device. 2023-07-29 14:51:56 -04:00
comfyanonymous
5d64d20ef5 Fix some new loras. 2023-07-25 16:39:15 -04:00
comfyanonymous
2c22729cec Start is now 0.0 and end is now 1.0 for the timestep ranges. 2023-07-24 18:38:17 -04:00
comfyanonymous
857a80b857 ControlNetApplyAdvanced can now define when controlnet gets applied. 2023-07-24 17:50:49 -04:00
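With start at 0.0 and end at 1.0 expressed as fractions of the sampling schedule, gating when the controlnet is applied reduces to a range check on sampling progress. A hypothetical helper, not the node's actual code:

```python
def controlnet_active(step, total_steps, start_percent=0.0, end_percent=1.0):
    """Return True when the controlnet should be applied at this step.
    start_percent/end_percent are fractions of the schedule in 0.0..1.0."""
    progress = step / max(total_steps - 1, 1)
    return start_percent <= progress <= end_percent

# With the defaults the controlnet applies on every step; with e.g.
# start_percent=0.5 it only kicks in for the second half of sampling.
```

Expressing the range as fractions rather than raw step indices keeps the same node settings meaningful across different step counts and schedulers.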
comfyanonymous
aa8fde7d6b Try to fix memory issue with lora. 2023-07-22 21:38:56 -04:00
comfyanonymous
676b635914 Del the right object when applying lora. 2023-07-22 11:25:49 -04:00
comfyanonymous
bfb95012c7 Support controlnet in diffusers format. 2023-07-21 22:58:16 -04:00
comfyanonymous
c8a5d3363c Fix issue with lora in some cases when combined with model merging. 2023-07-21 21:27:27 -04:00
comfyanonymous
be012f4b28 Properly support SDXL diffusers unet with UNETLoader node. 2023-07-21 14:38:56 -04:00