Commit Graph

550 Commits

Author SHA1 Message Date
comfyanonymous
c60864b5e4 Refactor the attention functions.
There's no reason for the whole CrossAttention object to be repeated when
only the operation in the middle changes.
2023-10-11 20:38:48 -04:00
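The shape of that refactor can be sketched in pure Python (a stand-in, not ComfyUI's actual code): one shared `CrossAttention` wrapper, with the backend-specific core passed in as a plain function.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of floats
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_core(q, k, v):
    # the "operation in the middle" that differs between backends
    # (xformers, pytorch, split attention, ...); rows are token vectors
    scale = len(q[0]) ** -0.5
    out = []
    for qi in q:
        logits = [scale * sum(a * b for a, b in zip(qi, kj)) for kj in k]
        w = softmax(logits)
        out.append([sum(wi * vj[d] for wi, vj in zip(w, v))
                    for d in range(len(v[0]))])
    return out

class CrossAttention:
    """One shared wrapper; only the inner attention function is swapped,
    instead of repeating the whole class per backend."""
    def __init__(self, attn_fn=attention_core):
        self.attn_fn = attn_fn

    def __call__(self, q, k, v):
        return self.attn_fn(q, k, v)
```

Swapping backends then means passing a different `attn_fn`, while projections and everything around the core stay shared.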
comfyanonymous
e095eadda5 Let unet wrapper functions have .to attributes. 2023-10-11 01:34:38 -04:00
comfyanonymous
9c8d53db6c Cleanup. 2023-10-10 21:46:53 -04:00
comfyanonymous
fc126a3d33 Merge branch 'taesd_safetensors' of https://github.com/mochiya98/ComfyUI 2023-10-10 21:42:35 -04:00
Yukimasa Funaoka
74dc9ecc66 Supports TAESD models in safetensors format 2023-10-10 13:21:44 +09:00
comfyanonymous
58c5545dcf Merge branch 'input-directory' of https://github.com/jn-jairo/ComfyUI 2023-10-09 01:53:29 -04:00
comfyanonymous
4456b4d137 load_checkpoint_guess_config can now optionally output the model. 2023-10-06 13:48:18 -04:00
Jairo Correa
26a1afbcfe Option to input directory 2023-10-04 19:45:15 -03:00
City
ee2d8e46e1 Fix quality loss due to low precision 2023-10-04 15:40:59 +02:00
badayvedat
d4aa26b684 fix: typo in extra sampler 2023-09-29 06:09:59 +03:00
comfyanonymous
eeadcff352 Add SamplerDPMPP_2M_SDE node. 2023-09-28 21:56:23 -04:00
comfyanonymous
c59ff7dc9b Print missing VAE keys. 2023-09-28 00:54:57 -04:00
comfyanonymous
97bd301d8f Add missing samplers to KSamplerSelect. 2023-09-28 00:17:03 -04:00
comfyanonymous
75a26ed5ee Add a SamplerCustom Node.
This node takes a list of sigmas and a sampler object as input.

This lets people easily implement custom schedulers and samplers as nodes.

More nodes will be added to it in the future.
2023-09-27 22:21:18 -04:00
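A custom scheduler for such a node is just a function that emits a decreasing list of sigmas. Here is a sketch of a Karras-style schedule (function name and parameter defaults are illustrative, not ComfyUI's API):

```python
def karras_sigmas(n, sigma_min=0.1, sigma_max=10.0, rho=7.0):
    """Karras et al. style noise schedule: interpolate linearly in
    sigma^(1/rho) space, then append the final 0.0 sigma."""
    ramp = [i / (n - 1) for i in range(n)]
    max_inv = sigma_max ** (1 / rho)
    min_inv = sigma_min ** (1 / rho)
    return [(max_inv + t * (min_inv - max_inv)) ** rho for t in ramp] + [0.0]
```

Any such list of sigmas, paired with a sampler object, is what the SamplerCustom node consumes.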
comfyanonymous
de1158883e Refactor sampling related code. 2023-09-27 16:45:22 -04:00
comfyanonymous
23ed2b1654 Model patches can now know which batch is positive and negative. 2023-09-27 12:04:07 -04:00
comfyanonymous
67a9216de5 Scheduler code refactor. 2023-09-26 17:07:07 -04:00
comfyanonymous
cbeebba747 Sampling code refactor. 2023-09-26 13:45:15 -04:00
comfyanonymous
b1650c89ce Support more controlnet models. 2023-09-23 18:47:46 -04:00
comfyanonymous
68569d26df Merge branch 'cast_intel' of https://github.com/simonlui/ComfyUI 2023-09-23 00:57:17 -04:00
Simon Lui
47164eb065 Allow Intel GPUs to do the LoRA cast on GPU since they support BF16 natively. 2023-09-22 21:11:27 -07:00

comfyanonymous
a990441ba0 Add a way to set output block patches to modify the h and hsp. 2023-09-22 20:26:47 -04:00
comfyanonymous
02f4208e1f Allow having a different pooled output for each image in a batch. 2023-09-21 01:14:42 -04:00
comfyanonymous
795f5b3163 Only do the cast on the device if the device supports it. 2023-09-20 17:52:41 -04:00
comfyanonymous
e67a083a9f Don't depend on torchvision. 2023-09-19 13:12:47 -04:00
MoonRide303
20d8e318c5 Added support for lanczos scaling 2023-09-19 10:40:38 +02:00
comfyanonymous
a6511d69b0 Do lora cast on GPU instead of CPU for higher performance. 2023-09-18 23:04:49 -04:00
comfyanonymous
cdbbeb584d Enable pytorch attention by default on xpu. 2023-09-17 04:09:19 -04:00
comfyanonymous
5d9b731cf9 Support models without previews. 2023-09-16 12:59:54 -04:00
comfyanonymous
6d227d6fe0 Add cond_or_uncond array to transformer_options so hooks can check what is
cond and what is uncond.
2023-09-15 22:21:14 -04:00
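A hook consuming that array might look like the following sketch. It assumes the convention that `cond_or_uncond` lists one flag per equal-sized chunk of the batch, with 0 for cond and 1 for uncond; the function name and scaling behaviour are hypothetical.

```python
def scale_uncond(batch, transformer_options, factor=1.5):
    """Hypothetical model patch: scale only the unconditional chunks.

    transformer_options["cond_or_uncond"] is assumed to mark, per chunk
    of the batch, whether that chunk is cond (0) or uncond (1)."""
    flags = transformer_options["cond_or_uncond"]
    chunk = len(batch) // len(flags)
    out = []
    for i, flag in enumerate(flags):
        part = batch[i * chunk:(i + 1) * chunk]
        out.extend([x * factor for x in part] if flag == 1 else part)
    return out
```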
comfyanonymous
509dd5ca26 Add DDPM sampler. 2023-09-15 19:22:47 -04:00
comfyanonymous
f45f65b6a4 This isn't used anywhere. 2023-09-15 12:03:03 -04:00
comfyanonymous
11df5713a0 Support for text encoder models that need attention_mask. 2023-09-15 02:02:05 -04:00
comfyanonymous
7211e1dc26 Setting the last layer on SD2.x models now uses the proper indexes.
Before, I had made the last layer the penultimate layer because some
checkpoints don't have it, but that's not consistent with the other models.

TLDR: for SD2.x models only: CLIPSetLastLayer -1 is now -2.
2023-09-14 20:28:22 -04:00
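The index change is easiest to see with Python negative indexing over a hypothetical list of per-layer text-encoder outputs:

```python
layers = ["hidden_%d" % i for i in range(24)]  # hypothetical layer outputs

# Before this commit, SD2.x treated -1 as the penultimate layer:
old_minus_one = layers[-2]
# After, -1 is the true last layer, matching the other models;
# pass -2 to CLIPSetLastLayer to reproduce the old SD2.x behaviour.
new_minus_one = layers[-1]
```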
comfyanonymous
cac135d12f Don't run text encoders on xpu because there are issues. 2023-09-14 12:16:07 -04:00
comfyanonymous
a3e0950ffb Only parse command line args when main.py is called. 2023-09-13 11:38:20 -04:00
comfyanonymous
c8905151e3 Don't leave very large hidden states in the clip vision output. 2023-09-12 15:09:10 -04:00
comfyanonymous
36cc11edbd Fix issue where autocast fp32 CLIP gave different results from regular. 2023-09-11 21:49:56 -04:00
comfyanonymous
ccc2f830d7 Add ldm format support to UNETLoader. 2023-09-11 16:36:50 -04:00
comfyanonymous
4eef469698 Add a penultimate_hidden_states to the clip vision output. 2023-09-08 14:06:58 -04:00
comfyanonymous
c4c69f8cc3 Support diffusers format t2i adapters. 2023-09-08 11:36:51 -04:00
comfyanonymous
c40f363254 Allow cancelling of everything with a progress bar. 2023-09-07 23:37:03 -04:00
comfyanonymous
b58288668f Add a ConditioningSetAreaPercentage node. 2023-09-06 03:28:27 -04:00
comfyanonymous
ef0c0892f6 Add a force argument to soft_empty_cache to force a cache empty. 2023-09-04 00:58:18 -04:00
comfyanonymous
2f03d19888 Merge branch 'generalize_fixes' of https://github.com/simonlui/ComfyUI 2023-09-04 00:43:11 -04:00
Simon Lui
f63ffd1bb1 Revert changes in comfy/ldm/modules/diffusionmodules/util.py, which is unused. 2023-09-02 20:07:52 -07:00
comfyanonymous
809db6d52f Move some functions to utils.py 2023-09-02 22:33:37 -04:00
Simon Lui
1148c2dec7 Some fixes to generalize CUDA specific functionality to Intel or other GPUs. 2023-09-02 18:22:10 -07:00
comfyanonymous
9f5ce6652b Use a common function to reshape the batch. 2023-09-02 03:42:49 -04:00
comfyanonymous
e68beb56e4 Support SDXL inpaint models. 2023-09-01 15:22:52 -04:00
comfyanonymous
b8f3570a1b Remove xformers related print. 2023-09-01 02:12:03 -04:00
comfyanonymous
39ca2da00c Fix controlnet bug. 2023-09-01 02:01:08 -04:00
comfyanonymous
a0578c5470 Fix controlnet issue. 2023-08-31 15:16:58 -04:00
comfyanonymous
61036397c8 It doesn't make sense for c_crossattn and c_concat to be lists. 2023-08-31 13:25:00 -04:00
comfyanonymous
845faf8e51 Clean up DiffusersLoader node. 2023-08-30 12:57:07 -04:00
Simon Lui
c902fd7505 Fix error message in model_patcher.py
Found while tinkering.
2023-08-30 00:25:04 -07:00
comfyanonymous
2c2ba07ff2 Fix "Load Checkpoint with config" node. 2023-08-29 23:58:32 -04:00
comfyanonymous
0482f057e0 Support SDXL t2i adapters with 3 channel input. 2023-08-29 16:44:57 -04:00
comfyanonymous
21b72ff81b Move beta_schedule to model_config and allow disabling unet creation. 2023-08-29 14:22:53 -04:00
comfyanonymous
2ab478346d Remove optimization that caused border. 2023-08-29 11:21:36 -04:00
comfyanonymous
5da23d7f05 No need to check filename extensions to detect shuffle controlnet. 2023-08-28 16:49:06 -04:00
comfyanonymous
e4957ff97e Put clip vision outputs on the CPU. 2023-08-28 16:26:11 -04:00
comfyanonymous
256eb57284 Load clipvision model to GPU for faster performance. 2023-08-28 15:29:27 -04:00
comfyanonymous
4fb6163a21 Text encoder should initially load on the offload_device, not the regular one. 2023-08-28 15:08:45 -04:00
comfyanonymous
201631e61d Move ModelPatcher to model_patcher.py 2023-08-28 14:51:31 -04:00
comfyanonymous
daae7db069 Implement loras with norm keys. 2023-08-28 11:20:06 -04:00
comfyanonymous
ae3f7060d8 Enable bf16-vae by default on ampere and up. 2023-08-27 23:06:19 -04:00
comfyanonymous
6932eda1fb Fallback to slice attention if xformers doesn't support the operation. 2023-08-27 22:24:42 -04:00
comfyanonymous
1b9a6a9599 Make --bf16-vae work on torch 2.0 2023-08-27 21:33:53 -04:00
comfyanonymous
90bfcef833 Fix lowvram model merging. 2023-08-26 11:52:07 -04:00
comfyanonymous
30d39b387d The new smart memory management makes this unnecessary. 2023-08-25 18:02:15 -04:00
comfyanonymous
a2602fc4f9 Move controlnet code to comfy/controlnet.py 2023-08-25 17:33:04 -04:00
comfyanonymous
7fb8cbb5ac Move lora code to comfy/lora.py 2023-08-25 17:11:51 -04:00
comfyanonymous
145e279e6c Move text_projection to base clip model. 2023-08-24 23:43:48 -04:00
comfyanonymous
4731c0b618 Code cleanups. 2023-08-24 19:39:18 -04:00
comfyanonymous
74d1dfb0ad Try to free enough vram for control lora inference. 2023-08-24 17:20:54 -04:00
comfyanonymous
5dbbb2c93c Fix potential issue with text projection matrix multiplication. 2023-08-24 00:54:16 -04:00
comfyanonymous
e340ef7852 Always shift text encoder to GPU when the device supports fp16. 2023-08-23 21:45:00 -04:00
comfyanonymous
5ef57a983b Even with forced fp16 the cpu device should never use it. 2023-08-23 21:38:28 -04:00
comfyanonymous
1aff0360c3 Initialize text encoder to target dtype. 2023-08-23 21:01:15 -04:00
comfyanonymous
e7fc7fb557 Save memory by storing text encoder weights in fp16 in most situations.
Do inference in fp32 to make sure quality stays the exact same.
2023-08-23 01:08:51 -04:00
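The store-small/compute-big idea from this commit can be illustrated with the stdlib `struct` module's IEEE half-precision format (a standalone sketch, not ComfyUI's torch-based implementation):

```python
import struct

def store_fp16(weights):
    # keep weights in half precision: 2 bytes each instead of 4
    return struct.pack("<%de" % len(weights), *weights)

def load_fp32(packed):
    # upcast back to full precision before doing any inference math
    return list(struct.unpack("<%de" % (len(packed) // 2), packed))
```

Values exactly representable in fp16 round-trip unchanged; others lose a little precision in storage, which is why the actual inference is kept in fp32.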
comfyanonymous
ed16480867 All resolutions now work with t2i adapter for SDXL. 2023-08-22 16:23:54 -04:00
comfyanonymous
b168bdf3e5 T2I adapter SDXL. 2023-08-22 14:40:43 -04:00
comfyanonymous
08af73e450 Controlnet/t2iadapter cleanup. 2023-08-22 01:06:26 -04:00
comfyanonymous
f29b9306fd Fix control lora not working in fp32. 2023-08-21 20:38:31 -04:00
comfyanonymous
b982fd039e Fix ControlLora on lowvram. 2023-08-21 00:54:04 -04:00
comfyanonymous
819c4a42d3 Remove autocast from controlnet code. 2023-08-20 21:47:32 -04:00
comfyanonymous
37a6cb2649 Small cleanups. 2023-08-20 14:56:47 -04:00
Simon Lui
a670a3f848 Further tuning and fix mem_free_total. 2023-08-20 14:19:53 -04:00
Simon Lui
af8959c8a9 Add ipex optimize and other enhancements for Intel GPUs based on recent memory changes. 2023-08-20 14:19:51 -04:00
comfyanonymous
56901bd7c6 --disable-smart-memory now disables loading model directly to vram. 2023-08-20 04:00:53 -04:00
comfyanonymous
225a5f9f1f Free more memory before VAE encode/decode. 2023-08-19 12:13:13 -04:00
comfyanonymous
01a6f9b116 Fix issue with gligen. 2023-08-18 16:32:23 -04:00
comfyanonymous
280659a6ee Support for Control Loras.
Control loras are controlnets where some of the weights are stored in
"lora" format: an up and a down low-rank matrix that, when multiplied
together and added to the unet weight, give the controlnet weight.

This allows a much smaller memory footprint depending on the rank of the
matrices.

These controlnets are used just like regular ones.
2023-08-18 11:59:51 -04:00
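The reconstruction described in this commit message can be sketched in plain Python (the function names and naive matmul are illustrative, not ComfyUI's actual code):

```python
def matmul(a, b):
    # naive matrix multiply, enough for the sketch
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def control_lora_weight(unet_weight, up, down):
    """W_controlnet = W_unet + up @ down.

    up is (out x rank) and down is (rank x in); storing only these two
    low-rank factors instead of the full weight is what shrinks the file."""
    delta = matmul(up, down)
    return [[w + d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(unet_weight, delta)]
```

For a 4096x4096 layer, a rank-64 pair stores 2*4096*64 values instead of 4096^2, roughly a 32x reduction for that weight.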
comfyanonymous
398390a76f ReVision support: unclip nodes can now be used with SDXL. 2023-08-18 11:59:36 -04:00
comfyanonymous
e246c23708 Add support for clip g vision model to CLIPVisionLoader. 2023-08-18 11:13:29 -04:00
Alexopus
a5a8d25943 Fix referenced before assignment
For https://github.com/BlenderNeko/ComfyUI_TiledKSampler/issues/13
2023-08-17 22:30:07 +02:00
comfyanonymous
bcf55c1446 Fix issue with not freeing enough memory when sampling. 2023-08-17 15:59:56 -04:00
comfyanonymous
d8f9334347 Fix bug with lowvram and controlnet advanced node. 2023-08-17 13:38:51 -04:00
comfyanonymous
e075077ad8 Fix potential issues with patching models when saving checkpoints. 2023-08-17 11:07:08 -04:00