comfyanonymous
a3e0950ffb
Only parse command line args when main.py is called.
2023-09-13 11:38:20 -04:00
comfyanonymous
c8905151e3
Don't leave very large hidden states in the clip vision output.
2023-09-12 15:09:10 -04:00
comfyanonymous
36cc11edbd
Fix issue where autocast fp32 CLIP gave different results from regular.
2023-09-11 21:49:56 -04:00
comfyanonymous
ccc2f830d7
Add ldm format support to UNETLoader.
2023-09-11 16:36:50 -04:00
comfyanonymous
4eef469698
Add penultimate_hidden_states to the clip vision output.
2023-09-08 14:06:58 -04:00
comfyanonymous
c4c69f8cc3
Support diffusers format t2i adapters.
2023-09-08 11:36:51 -04:00
comfyanonymous
c40f363254
Allow cancelling of everything with a progress bar.
2023-09-07 23:37:03 -04:00
comfyanonymous
b58288668f
Add a ConditioningSetAreaPercentage node.
2023-09-06 03:28:27 -04:00
comfyanonymous
ef0c0892f6
Add a force argument to soft_empty_cache to force emptying the cache.
2023-09-04 00:58:18 -04:00
comfyanonymous
2f03d19888
Merge branch 'generalize_fixes' of https://github.com/simonlui/ComfyUI
2023-09-04 00:43:11 -04:00
Simon Lui
f63ffd1bb1
Revert changes in comfy/ldm/modules/diffusionmodules/util.py, which is unused.
2023-09-02 20:07:52 -07:00
comfyanonymous
809db6d52f
Move some functions to utils.py
2023-09-02 22:33:37 -04:00
Simon Lui
1148c2dec7
Some fixes to generalize CUDA specific functionality to Intel or other GPUs.
2023-09-02 18:22:10 -07:00
comfyanonymous
9f5ce6652b
Use a common function to reshape the batch.
2023-09-02 03:42:49 -04:00
comfyanonymous
e68beb56e4
Support SDXL inpaint models.
2023-09-01 15:22:52 -04:00
comfyanonymous
b8f3570a1b
Remove xformers related print.
2023-09-01 02:12:03 -04:00
comfyanonymous
39ca2da00c
Fix controlnet bug.
2023-09-01 02:01:08 -04:00
comfyanonymous
a0578c5470
Fix controlnet issue.
2023-08-31 15:16:58 -04:00
comfyanonymous
61036397c8
It doesn't make sense for c_crossattn and c_concat to be lists.
2023-08-31 13:25:00 -04:00
comfyanonymous
845faf8e51
Clean up DiffusersLoader node.
2023-08-30 12:57:07 -04:00
Simon Lui
c902fd7505
Fix error message in model_patcher.py
Found while tinkering.
2023-08-30 00:25:04 -07:00
comfyanonymous
2c2ba07ff2
Fix "Load Checkpoint with config" node.
2023-08-29 23:58:32 -04:00
comfyanonymous
0482f057e0
Support SDXL t2i adapters with 3 channel input.
2023-08-29 16:44:57 -04:00
comfyanonymous
21b72ff81b
Move beta_schedule to model_config and allow disabling unet creation.
2023-08-29 14:22:53 -04:00
comfyanonymous
2ab478346d
Remove an optimization that caused a border artifact.
2023-08-29 11:21:36 -04:00
comfyanonymous
5da23d7f05
No need to check filename extensions to detect shuffle controlnet.
2023-08-28 16:49:06 -04:00
comfyanonymous
e4957ff97e
Put clip vision outputs on the CPU.
2023-08-28 16:26:11 -04:00
comfyanonymous
256eb57284
Load the clipvision model to the GPU for better performance.
2023-08-28 15:29:27 -04:00
comfyanonymous
4fb6163a21
The text encoder should initially load on the offload_device, not the regular device.
2023-08-28 15:08:45 -04:00
comfyanonymous
201631e61d
Move ModelPatcher to model_patcher.py
2023-08-28 14:51:31 -04:00
comfyanonymous
daae7db069
Implement loras with norm keys.
2023-08-28 11:20:06 -04:00
comfyanonymous
ae3f7060d8
Enable bf16-vae by default on Ampere and up.
2023-08-27 23:06:19 -04:00
comfyanonymous
6932eda1fb
Fall back to slice attention if xformers doesn't support the operation.
2023-08-27 22:24:42 -04:00
comfyanonymous
1b9a6a9599
Make --bf16-vae work on torch 2.0
2023-08-27 21:33:53 -04:00
comfyanonymous
90bfcef833
Fix lowvram model merging.
2023-08-26 11:52:07 -04:00
comfyanonymous
30d39b387d
The new smart memory management makes this unnecessary.
2023-08-25 18:02:15 -04:00
comfyanonymous
a2602fc4f9
Move controlnet code to comfy/controlnet.py
2023-08-25 17:33:04 -04:00
comfyanonymous
7fb8cbb5ac
Move lora code to comfy/lora.py
2023-08-25 17:11:51 -04:00
comfyanonymous
145e279e6c
Move text_projection to base clip model.
2023-08-24 23:43:48 -04:00
comfyanonymous
4731c0b618
Code cleanups.
2023-08-24 19:39:18 -04:00
comfyanonymous
74d1dfb0ad
Try to free enough vram for control lora inference.
2023-08-24 17:20:54 -04:00
comfyanonymous
5dbbb2c93c
Fix potential issue with text projection matrix multiplication.
2023-08-24 00:54:16 -04:00
comfyanonymous
e340ef7852
Always shift text encoder to GPU when the device supports fp16.
2023-08-23 21:45:00 -04:00
comfyanonymous
5ef57a983b
Even with forced fp16, the cpu device should never use it.
2023-08-23 21:38:28 -04:00
comfyanonymous
1aff0360c3
Initialize text encoder to target dtype.
2023-08-23 21:01:15 -04:00
comfyanonymous
e7fc7fb557
Save memory by storing text encoder weights in fp16 in most situations.
Do inference in fp32 to make sure quality stays exactly the same.
2023-08-23 01:08:51 -04:00
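A minimal sketch of the storage/compute split this commit describes (the helper name below is illustrative, not ComfyUI's actual code): weights are kept in fp16 to halve memory, and each operation upcasts them to fp32 on the fly so the result matches a full fp32 encoder.

    import torch

    def linear_fp32_from_fp16_storage(x: torch.Tensor, layer: torch.nn.Linear) -> torch.Tensor:
        # The layer's parameters live in fp16 (half the memory of fp32);
        # a temporary fp32 copy is made only for this matmul, so the
        # numerical result matches running the whole encoder in fp32.
        weight = layer.weight.float()
        bias = layer.bias.float() if layer.bias is not None else None
        return torch.nn.functional.linear(x.float(), weight, bias)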
comfyanonymous
ed16480867
All resolutions now work with t2i adapter for SDXL.
2023-08-22 16:23:54 -04:00
comfyanonymous
b168bdf3e5
T2I adapter SDXL.
2023-08-22 14:40:43 -04:00
comfyanonymous
08af73e450
Controlnet/t2iadapter cleanup.
2023-08-22 01:06:26 -04:00
comfyanonymous
f29b9306fd
Fix control lora not working in fp32.
2023-08-21 20:38:31 -04:00
comfyanonymous
b982fd039e
Fix ControlLora on lowvram.
2023-08-21 00:54:04 -04:00
comfyanonymous
819c4a42d3
Remove autocast from controlnet code.
2023-08-20 21:47:32 -04:00
comfyanonymous
37a6cb2649
Small cleanups.
2023-08-20 14:56:47 -04:00
Simon Lui
a670a3f848
Further tuning and a fix for mem_free_total.
2023-08-20 14:19:53 -04:00
Simon Lui
af8959c8a9
Add ipex optimize and other enhancements for Intel GPUs based on recent memory changes.
2023-08-20 14:19:51 -04:00
comfyanonymous
56901bd7c6
--disable-smart-memory now disables loading the model directly to vram.
2023-08-20 04:00:53 -04:00
comfyanonymous
225a5f9f1f
Free more memory before VAE encode/decode.
2023-08-19 12:13:13 -04:00
comfyanonymous
01a6f9b116
Fix issue with gligen.
2023-08-18 16:32:23 -04:00
comfyanonymous
280659a6ee
Support for Control Loras.
Control loras are controlnets where some of the weights are stored in
"lora" format: an up and a down low-rank matrix that, when multiplied
together and added to the unet weight, give the controlnet weight.
This allows a much smaller memory footprint depending on the rank of the
matrices.
These controlnets are used just like regular ones.
2023-08-18 11:59:51 -04:00
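As a rough illustration of the reconstruction described in this commit (function name and shapes are hypothetical, not ComfyUI's code), the controlnet weight is recovered by adding the product of the two stored low-rank factors to the base unet weight:

    import torch

    def control_lora_weight(unet_weight: torch.Tensor,
                            lora_up: torch.Tensor,
                            lora_down: torch.Tensor) -> torch.Tensor:
        # Only the two factors are stored on disk: up is (out_features x rank),
        # down is (rank x in_features). Their product, added to the base unet
        # weight, gives the controlnet weight.
        return unet_weight + lora_up @ lora_down

    # e.g. a 320x320 weight stored as two rank-64 factors (~2.5x fewer values)
    w = torch.randn(320, 320)
    up, down = torch.randn(320, 64), torch.randn(64, 320)
    control_w = control_lora_weight(w, up, down)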
comfyanonymous
398390a76f
ReVision support: unclip nodes can now be used with SDXL.
2023-08-18 11:59:36 -04:00
comfyanonymous
e246c23708
Add support for clip g vision model to CLIPVisionLoader.
2023-08-18 11:13:29 -04:00
Alexopus
a5a8d25943
Fix a "referenced before assignment" error.
For https://github.com/BlenderNeko/ComfyUI_TiledKSampler/issues/13
2023-08-17 22:30:07 +02:00
comfyanonymous
bcf55c1446
Fix issue with not freeing enough memory when sampling.
2023-08-17 15:59:56 -04:00
comfyanonymous
d8f9334347
Fix bug with lowvram and controlnet advanced node.
2023-08-17 13:38:51 -04:00
comfyanonymous
e075077ad8
Fix potential issues with patching models when saving checkpoints.
2023-08-17 11:07:08 -04:00
comfyanonymous
21e07337ed
Add --disable-smart-memory for those that want the old behaviour.
2023-08-17 03:12:37 -04:00
comfyanonymous
197ab43811
Fix issue with regular torch version.
2023-08-17 01:58:54 -04:00
comfyanonymous
a216b56591
Smarter memory management.
Try to keep models in vram when possible.
Better lowvram mode for controlnets.
2023-08-17 01:06:34 -04:00
comfyanonymous
e4ffcf2c61
Support small diffusers controlnet so both types are now supported.
2023-08-16 12:45:56 -04:00
comfyanonymous
1a21a2271e
Support diffusers mini controlnets.
2023-08-16 12:28:01 -04:00
comfyanonymous
9e0c084148
Fix clip vision issue with old transformers versions.
2023-08-16 11:36:22 -04:00
comfyanonymous
dcbf839d22
Fix potential issue with batch size and clip vision.
2023-08-16 11:05:11 -04:00
comfyanonymous
601e4a9865
Refactor unclip code.
2023-08-14 23:48:47 -04:00
comfyanonymous
736e2e8f49
CLIPVisionEncode can now encode multiple images.
2023-08-14 16:54:05 -04:00
comfyanonymous
87f1037cf5
Remove 3m from PR #1213 because of some small issues.
2023-08-14 00:48:45 -04:00
comfyanonymous
c3910c4ffb
Add sgm_uniform scheduler that acts like the default one in sgm.
2023-08-14 00:29:03 -04:00
comfyanonymous
c5666c503b
GPU variant of dpmpp_3m_sde. Note: use 3m with the exponential or karras scheduler.
2023-08-14 00:28:50 -04:00
comfyanonymous
fdff76f667
Merge branch 'dpmpp3m' of https://github.com/FizzleDorf/ComfyUI
2023-08-14 00:23:15 -04:00
FizzleDorf
8b4773ee86
Add dpmpp 3m and dpmpp 3m sde samplers.
2023-08-13 22:29:04 -04:00
comfyanonymous
c3df7d2861
Print unet config when model isn't detected.
2023-08-13 01:39:48 -04:00
comfyanonymous
a2d5028ad8
Support for yet another lora type based on diffusers.
2023-08-11 13:04:21 -04:00
comfyanonymous
64510bb4c2
Add --temp-directory argument to set temp directory.
2023-08-11 05:13:03 -04:00
comfyanonymous
c5a20b5c85
Support diffusers text encoder loras.
2023-08-10 20:28:28 -04:00
comfyanonymous
1b69a7e7ea
Disable calculating uncond when CFG is 1.0
2023-08-09 20:55:03 -04:00
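For context on why the unconditional pass can be skipped, here is a sketch of the standard classifier-free guidance formula (not ComfyUI's sampler code): the guided prediction is uncond + cfg * (cond - uncond), which reduces to cond when cfg is 1.0, so computing uncond is wasted work.

    def apply_cfg(cond, uncond, cfg_scale):
        # Classifier-free guidance blends conditional and unconditional predictions.
        if cfg_scale == 1.0:
            # uncond + 1.0 * (cond - uncond) == cond, so the uncond pass can be
            # skipped, roughly halving the model evaluations per sampling step.
            return cond
        return uncond + cfg_scale * (cond - uncond)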
comfyanonymous
84c0dd8247
Add argument to disable auto launching the browser.
2023-08-07 02:25:12 -04:00
comfyanonymous
531ea9c1aa
Detect hint_channels from controlnet.
2023-08-06 14:08:59 -04:00
comfyanonymous
18c180b727
Support loras in diffusers format.
2023-08-05 01:40:24 -04:00
comfyanonymous
ef16077917
Add CMP 30HX card to the nvidia_16_series list.
2023-08-04 12:08:45 -04:00
comfyanonymous
bbd5052ed0
Make sure the pooled output stays at the EOS token with added embeddings.
2023-08-03 20:27:50 -04:00
comfyanonymous
28401d83c5
Only shift the text encoder to vram when there are fewer than 8 CPU cores.
2023-07-31 00:08:54 -04:00
comfyanonymous
2ee42215be
Lower the CPU thread threshold for running the text encoder on the CPU vs GPU.
2023-07-30 17:18:24 -04:00
comfyanonymous
9c5ad64310
Remove some useless code.
2023-07-30 14:13:33 -04:00
comfyanonymous
71ffc7b350
Faster VAE loading.
2023-07-29 16:28:30 -04:00
comfyanonymous
8a8d8c86d6
Initialize the unet directly on the target device.
2023-07-29 14:51:56 -04:00
comfyanonymous
6651febc61
Remove unused code and torchdiffeq dependency.
2023-07-28 21:32:27 -04:00
comfyanonymous
8088be4f2e
Add --disable-metadata argument to disable saving metadata in files.
2023-07-28 12:31:41 -04:00
comfyanonymous
7ffe966c0a
Merge branch 'fix_batch_timesteps' of https://github.com/asagi4/ComfyUI
2023-07-27 16:13:48 -04:00
comfyanonymous
cb47a5674c
Remove some prints.
2023-07-27 16:12:43 -04:00
asagi4
a63268b4e9
Fix timestep ranges when batch_size > 1
2023-07-27 21:14:09 +03:00
comfyanonymous
f9001baa44
Fix diffusers VAE loading.
2023-07-26 18:26:39 -04:00