Commit Graph

477 Commits

Author  SHA1  Message  Date
comfyanonymous  795f5b3163  Only do the cast on the device if the device supports it.  2023-09-20 17:52:41 -04:00
comfyanonymous  e67a083a9f  Don't depend on torchvision.  2023-09-19 13:12:47 -04:00
MoonRide303  20d8e318c5  Added support for lanczos scaling  2023-09-19 10:40:38 +02:00
comfyanonymous  a6511d69b0  Do lora cast on GPU instead of CPU for higher performance.  2023-09-18 23:04:49 -04:00
comfyanonymous  cdbbeb584d  Enable pytorch attention by default on xpu.  2023-09-17 04:09:19 -04:00
comfyanonymous  5d9b731cf9  Support models without previews.  2023-09-16 12:59:54 -04:00
comfyanonymous  6d227d6fe0  Add cond_or_uncond array to transformer_options so hooks can check what is cond and what is uncond.  2023-09-15 22:21:14 -04:00
comfyanonymous  509dd5ca26  Add DDPM sampler.  2023-09-15 19:22:47 -04:00
comfyanonymous  f45f65b6a4  This isn't used anywhere.  2023-09-15 12:03:03 -04:00
comfyanonymous  11df5713a0  Support for text encoder models that need attention_mask.  2023-09-15 02:02:05 -04:00
comfyanonymous  7211e1dc26  Setting the last layer on SD2.x models now uses the proper indexes. Previously the last layer was treated as the penultimate layer because some checkpoints don't include it, but that was inconsistent with the other models. TLDR: for SD2.x models only, CLIPSetLastLayer -1 is now -2.  2023-09-14 20:28:22 -04:00
comfyanonymous  cac135d12f  Don't run text encoders on xpu because there are issues.  2023-09-14 12:16:07 -04:00
comfyanonymous  a3e0950ffb  Only parse command line args when main.py is called.  2023-09-13 11:38:20 -04:00
comfyanonymous  c8905151e3  Don't leave very large hidden states in the clip vision output.  2023-09-12 15:09:10 -04:00
comfyanonymous  36cc11edbd  Fix issue where autocast fp32 CLIP gave different results from regular.  2023-09-11 21:49:56 -04:00
comfyanonymous  ccc2f830d7  Add ldm format support to UNETLoader.  2023-09-11 16:36:50 -04:00
comfyanonymous  4eef469698  Add a penultimate_hidden_states to the clip vision output.  2023-09-08 14:06:58 -04:00
comfyanonymous  c4c69f8cc3  Support diffusers format t2i adapters.  2023-09-08 11:36:51 -04:00
comfyanonymous  c40f363254  Allow cancelling of everything with a progress bar.  2023-09-07 23:37:03 -04:00
comfyanonymous  b58288668f  Add a ConditioningSetAreaPercentage node.  2023-09-06 03:28:27 -04:00
comfyanonymous  ef0c0892f6  Add a force argument to soft_empty_cache to force a cache empty.  2023-09-04 00:58:18 -04:00
comfyanonymous  2f03d19888  Merge branch 'generalize_fixes' of https://github.com/simonlui/ComfyUI  2023-09-04 00:43:11 -04:00
Simon Lui  f63ffd1bb1  Revert changes in comfy/ldm/modules/diffusionmodules/util.py, which is unused.  2023-09-02 20:07:52 -07:00
comfyanonymous  809db6d52f  Move some functions to utils.py  2023-09-02 22:33:37 -04:00
Simon Lui  1148c2dec7  Some fixes to generalize CUDA specific functionality to Intel or other GPUs.  2023-09-02 18:22:10 -07:00
comfyanonymous  9f5ce6652b  Use common function to reshape batch to.  2023-09-02 03:42:49 -04:00
comfyanonymous  e68beb56e4  Support SDXL inpaint models.  2023-09-01 15:22:52 -04:00
comfyanonymous  b8f3570a1b  Remove xformers related print.  2023-09-01 02:12:03 -04:00
comfyanonymous  39ca2da00c  Fix controlnet bug.  2023-09-01 02:01:08 -04:00
comfyanonymous  a0578c5470  Fix controlnet issue.  2023-08-31 15:16:58 -04:00
comfyanonymous  61036397c8  It doesn't make sense for c_crossattn and c_concat to be lists.  2023-08-31 13:25:00 -04:00
comfyanonymous  845faf8e51  Clean up DiffusersLoader node.  2023-08-30 12:57:07 -04:00
Simon Lui  c902fd7505  Fix error message in model_patcher.py. Found while tinkering.  2023-08-30 00:25:04 -07:00
comfyanonymous  2c2ba07ff2  Fix "Load Checkpoint with config" node.  2023-08-29 23:58:32 -04:00
comfyanonymous  0482f057e0  Support SDXL t2i adapters with 3 channel input.  2023-08-29 16:44:57 -04:00
comfyanonymous  21b72ff81b  Move beta_schedule to model_config and allow disabling unet creation.  2023-08-29 14:22:53 -04:00
comfyanonymous  2ab478346d  Remove optimization that caused border.  2023-08-29 11:21:36 -04:00
comfyanonymous  5da23d7f05  No need to check filename extensions to detect shuffle controlnet.  2023-08-28 16:49:06 -04:00
comfyanonymous  e4957ff97e  Put clip vision outputs on the CPU.  2023-08-28 16:26:11 -04:00
comfyanonymous  256eb57284  Load clipvision model to GPU for faster performance.  2023-08-28 15:29:27 -04:00
comfyanonymous  4fb6163a21  Text encoder should initially load on the offload_device not the regular.  2023-08-28 15:08:45 -04:00
comfyanonymous  201631e61d  Move ModelPatcher to model_patcher.py  2023-08-28 14:51:31 -04:00
comfyanonymous  daae7db069  Implement loras with norm keys.  2023-08-28 11:20:06 -04:00
comfyanonymous  ae3f7060d8  Enable bf16-vae by default on ampere and up.  2023-08-27 23:06:19 -04:00
comfyanonymous  6932eda1fb  Fallback to slice attention if xformers doesn't support the operation.  2023-08-27 22:24:42 -04:00
comfyanonymous  1b9a6a9599  Make --bf16-vae work on torch 2.0  2023-08-27 21:33:53 -04:00
comfyanonymous  90bfcef833  Fix lowvram model merging.  2023-08-26 11:52:07 -04:00
comfyanonymous  30d39b387d  The new smart memory management makes this unnecessary.  2023-08-25 18:02:15 -04:00
comfyanonymous  a2602fc4f9  Move controlnet code to comfy/controlnet.py  2023-08-25 17:33:04 -04:00
comfyanonymous  7fb8cbb5ac  Move lora code to comfy/lora.py  2023-08-25 17:11:51 -04:00