comfyanonymous
ccc2f830d7
Add ldm format support to UNETLoader.
2023-09-11 16:36:50 -04:00
comfyanonymous
e68beb56e4
Support SDXL inpaint models.
2023-09-01 15:22:52 -04:00
comfyanonymous
2c2ba07ff2
Fix "Load Checkpoint with config" node.
2023-08-29 23:58:32 -04:00
comfyanonymous
4fb6163a21
Text encoder should initially load on the offload_device, not the regular device.
2023-08-28 15:08:45 -04:00
comfyanonymous
201631e61d
Move ModelPatcher to model_patcher.py
2023-08-28 14:51:31 -04:00
comfyanonymous
a2602fc4f9
Move controlnet code to comfy/controlnet.py
2023-08-25 17:33:04 -04:00
comfyanonymous
7fb8cbb5ac
Move lora code to comfy/lora.py
2023-08-25 17:11:51 -04:00
comfyanonymous
145e279e6c
Move text_projection to base clip model.
2023-08-24 23:43:48 -04:00
comfyanonymous
74d1dfb0ad
Try to free enough vram for control lora inference.
2023-08-24 17:20:54 -04:00
comfyanonymous
e340ef7852
Always shift text encoder to GPU when the device supports fp16.
2023-08-23 21:45:00 -04:00
comfyanonymous
1aff0360c3
Initialize text encoder to target dtype.
2023-08-23 21:01:15 -04:00
comfyanonymous
e7fc7fb557
Save memory by storing text encoder weights in fp16 in most situations.
...
Do inference in fp32 to make sure quality stays exactly the same.
2023-08-23 01:08:51 -04:00
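The store-in-fp16, compute-in-fp32 pattern from the commit above can be sketched as follows. This is a minimal numpy illustration of the idea, not ComfyUI's actual implementation; all names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Weights are stored in half precision, halving the memory footprint.
w32 = rng.standard_normal((64, 64)).astype(np.float32)
w16 = w32.astype(np.float16)  # stored form: half the bytes of fp32

x = rng.standard_normal((1, 64)).astype(np.float32)

# Upcast only at inference time, so the matmul itself runs in fp32
# and avoids fp16 accumulation error.
y = x @ w16.astype(np.float32)
```

The trade-off: the one-time fp16 cast of the weights loses some precision, but performing the arithmetic in fp32 keeps the rest of the computation unchanged.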
comfyanonymous
ed16480867
All resolutions now work with t2i adapter for SDXL.
2023-08-22 16:23:54 -04:00
comfyanonymous
b168bdf3e5
T2I adapter SDXL.
2023-08-22 14:40:43 -04:00
comfyanonymous
08af73e450
Controlnet/t2iadapter cleanup.
2023-08-22 01:06:26 -04:00
comfyanonymous
f29b9306fd
Fix control lora not working in fp32.
2023-08-21 20:38:31 -04:00
comfyanonymous
b982fd039e
Fix ControlLora on lowvram.
2023-08-21 00:54:04 -04:00
comfyanonymous
819c4a42d3
Remove autocast from controlnet code.
2023-08-20 21:47:32 -04:00
comfyanonymous
225a5f9f1f
Free more memory before VAE encode/decode.
2023-08-19 12:13:13 -04:00
comfyanonymous
280659a6ee
Support for Control Loras.
...
Control loras are controlnets where some of the weights are stored in
"lora" format: an up and a down low-rank matrix that, when multiplied
together and added to the unet weight, give the controlnet weight.
This allows a much smaller memory footprint depending on the rank of the
matrices.
These controlnets are used just like regular ones.
2023-08-18 11:59:51 -04:00
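The weight reconstruction described in the commit above can be sketched as follows. This is a minimal numpy sketch of the low-rank scheme, with hypothetical shapes and names, not ComfyUI's actual code:

```python
import numpy as np

def control_lora_weight(unet_weight, up, down):
    """Recover a full controlnet weight from the base unet weight plus
    an up/down low-rank factor pair stored in "lora" format."""
    return unet_weight + up @ down

# Hypothetical rank-4 factors for an 8x8 weight: storing 2 * 8 * 4
# values instead of 8 * 8 is where the memory savings come from,
# and the savings grow with the size of the weight for a fixed rank.
rng = np.random.default_rng(0)
base = rng.standard_normal((8, 8))
up = rng.standard_normal((8, 4))
down = rng.standard_normal((4, 8))

w = control_lora_weight(base, up, down)
```

Because the full controlnet weight is reconstructed from the unet weight on the fly, these controlnets plug in just like regular ones.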
comfyanonymous
e075077ad8
Fix potential issues with patching models when saving checkpoints.
2023-08-17 11:07:08 -04:00
comfyanonymous
a216b56591
Smarter memory management.
...
Try to keep models on the vram when possible.
Better lowvram mode for controlnets.
2023-08-17 01:06:34 -04:00
comfyanonymous
1a21a2271e
Support diffusers mini controlnets.
2023-08-16 12:28:01 -04:00
comfyanonymous
a2d5028ad8
Support for yet another lora type based on diffusers.
2023-08-11 13:04:21 -04:00
comfyanonymous
c5a20b5c85
Support diffusers text encoder loras.
2023-08-10 20:28:28 -04:00
comfyanonymous
531ea9c1aa
Detect hint_channels from controlnet.
2023-08-06 14:08:59 -04:00
comfyanonymous
18c180b727
Support loras in diffusers format.
2023-08-05 01:40:24 -04:00
comfyanonymous
8a8d8c86d6
Initialize the unet directly on the target device.
2023-07-29 14:51:56 -04:00
comfyanonymous
5d64d20ef5
Fix some new loras.
2023-07-25 16:39:15 -04:00
comfyanonymous
2c22729cec
Start is now 0.0 and end is now 1.0 for the timestep ranges.
2023-07-24 18:38:17 -04:00
comfyanonymous
857a80b857
ControlNetApplyAdvanced can now define when controlnet gets applied.
2023-07-24 17:50:49 -04:00
comfyanonymous
aa8fde7d6b
Try to fix memory issue with lora.
2023-07-22 21:38:56 -04:00
comfyanonymous
676b635914
Del the right object when applying lora.
2023-07-22 11:25:49 -04:00
comfyanonymous
bfb95012c7
Support controlnet in diffusers format.
2023-07-21 22:58:16 -04:00
comfyanonymous
c8a5d3363c
Fix issue with lora in some cases when combined with model merging.
2023-07-21 21:27:27 -04:00
comfyanonymous
be012f4b28
Properly support SDXL diffusers unet with UNETLoader node.
2023-07-21 14:38:56 -04:00
comfyanonymous
e2c3256a19
Print errors and continue when lora weights are not compatible.
2023-07-20 19:56:22 -04:00
comfyanonymous
debccdc6f9
Refactor of sampler code to deal more easily with different model types.
2023-07-17 01:22:12 -04:00
comfyanonymous
ba6e888eb9
Lower lora ram usage when in normal vram mode.
2023-07-16 02:59:04 -04:00
comfyanonymous
73c2afbe44
Speed up lora loading a bit.
2023-07-15 13:25:22 -04:00
comfyanonymous
f67f1c99b8
Fix CLIPSetLastLayer not reverting when removed.
2023-07-15 01:41:21 -04:00
comfyanonymous
26da73b600
Reduce floating point rounding errors in loras.
2023-07-15 00:53:00 -04:00
comfyanonymous
daac253452
Add a node to merge CLIP models.
2023-07-14 02:41:18 -04:00
comfyanonymous
c1c170f61a
Don't patch weights when multiplier is zero.
2023-07-09 17:46:56 -04:00
comfyanonymous
c5779f04aa
Fix merging not working when model2 of model merge node was a merge.
2023-07-08 22:31:10 -04:00
comfyanonymous
9caaa09c71
Add arguments to run the VAE in fp16 or bf16 for testing.
2023-07-06 23:23:46 -04:00
comfyanonymous
fa8010f038
Disable autocast in unet for increased speed.
2023-07-05 21:58:29 -04:00
comfyanonymous
2ff6108df3
Support loading unet files in diffusers format.
2023-07-05 17:38:59 -04:00
comfyanonymous
60bdf7c00b
Properly support SDXL diffusers loras for unet.
2023-07-04 21:15:23 -04:00
comfyanonymous
fcee7e88db
Pass device to CLIP model.
2023-07-03 16:09:37 -04:00