Commit Graph

459 Commits

Author SHA1 Message Date
comfyanonymous
c40f363254 Allow cancelling of everything with a progress bar. 2023-09-07 23:37:03 -04:00
comfyanonymous
b58288668f Add a ConditioningSetAreaPercentage node. 2023-09-06 03:28:27 -04:00
comfyanonymous
ef0c0892f6 Add a force argument to soft_empty_cache to force a cache empty. 2023-09-04 00:58:18 -04:00
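A minimal sketch of how such a `force` flag might gate the cache flush. The helper below is an assumption for illustration only and may not match the actual `comfy.model_management.soft_empty_cache` implementation:

```python
import torch

def soft_empty_cache(force: bool = False):
    """Hypothetical sketch: flush allocator caches only when forced or cheap to do."""
    if torch.cuda.is_available():
        # Emptying the CUDA cache has a cost, so skip it unless the caller
        # forces it or nothing is currently allocated anyway.
        if force or torch.cuda.memory_allocated() == 0:
            torch.cuda.empty_cache()
            torch.cuda.ipc_collect()
    elif torch.backends.mps.is_available():
        torch.mps.empty_cache()
```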
comfyanonymous
2f03d19888 Merge branch 'generalize_fixes' of https://github.com/simonlui/ComfyUI 2023-09-04 00:43:11 -04:00
Simon Lui
f63ffd1bb1 Revert changes in comfy/ldm/modules/diffusionmodules/util.py, which is unused. 2023-09-02 20:07:52 -07:00
comfyanonymous
809db6d52f Move some functions to utils.py 2023-09-02 22:33:37 -04:00
Simon Lui
1148c2dec7 Some fixes to generalize CUDA specific functionality to Intel or other GPUs. 2023-09-02 18:22:10 -07:00
comfyanonymous
9f5ce6652b Use a common function to reshape the batch. 2023-09-02 03:42:49 -04:00
comfyanonymous
e68beb56e4 Support SDXL inpaint models. 2023-09-01 15:22:52 -04:00
comfyanonymous
b8f3570a1b Remove xformers related print. 2023-09-01 02:12:03 -04:00
comfyanonymous
39ca2da00c Fix controlnet bug. 2023-09-01 02:01:08 -04:00
comfyanonymous
a0578c5470 Fix controlnet issue. 2023-08-31 15:16:58 -04:00
comfyanonymous
61036397c8 It doesn't make sense for c_crossattn and c_concat to be lists. 2023-08-31 13:25:00 -04:00
comfyanonymous
845faf8e51 Clean up DiffusersLoader node. 2023-08-30 12:57:07 -04:00
Simon Lui
c902fd7505 Fix error message in model_patcher.py. Found while tinkering. 2023-08-30 00:25:04 -07:00
comfyanonymous
2c2ba07ff2 Fix "Load Checkpoint with config" node. 2023-08-29 23:58:32 -04:00
comfyanonymous
0482f057e0 Support SDXL t2i adapters with 3 channel input. 2023-08-29 16:44:57 -04:00
comfyanonymous
21b72ff81b Move beta_schedule to model_config and allow disabling unet creation. 2023-08-29 14:22:53 -04:00
comfyanonymous
2ab478346d Remove optimization that caused border. 2023-08-29 11:21:36 -04:00
comfyanonymous
5da23d7f05 No need to check filename extensions to detect shuffle controlnet. 2023-08-28 16:49:06 -04:00
comfyanonymous
e4957ff97e Put clip vision outputs on the CPU. 2023-08-28 16:26:11 -04:00
comfyanonymous
256eb57284 Load clipvision model to GPU for faster performance. 2023-08-28 15:29:27 -04:00
comfyanonymous
4fb6163a21 Text encoder should initially load on the offload_device, not the regular device. 2023-08-28 15:08:45 -04:00
comfyanonymous
201631e61d Move ModelPatcher to model_patcher.py 2023-08-28 14:51:31 -04:00
comfyanonymous
daae7db069 Implement loras with norm keys. 2023-08-28 11:20:06 -04:00
comfyanonymous
ae3f7060d8 Enable bf16-vae by default on ampere and up. 2023-08-27 23:06:19 -04:00
comfyanonymous
6932eda1fb Fall back to sliced attention if xformers doesn't support the operation. 2023-08-27 22:24:42 -04:00
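The fallback idea, sketched under assumptions (the function name and slicing strategy are illustrative, not the actual ComfyUI code): try xformers' memory-efficient attention and, if the op rejects the given shapes or dtypes, recompute attention in slices along the query dimension so the full attention matrix never has to exist at once.

```python
import torch

def attention_with_fallback(q, k, v, slice_size: int = 1024):
    # q, k, v assumed shaped (batch * heads, seq_len, dim_head).
    try:
        import xformers.ops
        return xformers.ops.memory_efficient_attention(q, k, v)
    except (ImportError, NotImplementedError):
        # Sliced attention: process the query sequence in chunks to cap
        # peak memory at (slice_size x seq_len) per batch element.
        scale = q.shape[-1] ** -0.5
        out = torch.empty_like(q)
        for i in range(0, q.shape[1], slice_size):
            scores = q[:, i:i + slice_size] @ k.transpose(-1, -2) * scale
            out[:, i:i + slice_size] = scores.softmax(dim=-1) @ v
        return out
```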
comfyanonymous
1b9a6a9599 Make --bf16-vae work on torch 2.0 2023-08-27 21:33:53 -04:00
comfyanonymous
90bfcef833 Fix lowvram model merging. 2023-08-26 11:52:07 -04:00
comfyanonymous
30d39b387d The new smart memory management makes this unnecessary. 2023-08-25 18:02:15 -04:00
comfyanonymous
a2602fc4f9 Move controlnet code to comfy/controlnet.py 2023-08-25 17:33:04 -04:00
comfyanonymous
7fb8cbb5ac Move lora code to comfy/lora.py 2023-08-25 17:11:51 -04:00
comfyanonymous
145e279e6c Move text_projection to base clip model. 2023-08-24 23:43:48 -04:00
comfyanonymous
4731c0b618 Code cleanups. 2023-08-24 19:39:18 -04:00
comfyanonymous
74d1dfb0ad Try to free enough vram for control lora inference. 2023-08-24 17:20:54 -04:00
comfyanonymous
5dbbb2c93c Fix potential issue with text projection matrix multiplication. 2023-08-24 00:54:16 -04:00
comfyanonymous
e340ef7852 Always shift text encoder to GPU when the device supports fp16. 2023-08-23 21:45:00 -04:00
comfyanonymous
5ef57a983b Even with forced fp16 the cpu device should never use it. 2023-08-23 21:38:28 -04:00
comfyanonymous
1aff0360c3 Initialize text encoder to target dtype. 2023-08-23 21:01:15 -04:00
comfyanonymous
e7fc7fb557 Save memory by storing text encoder weights in fp16 in most situations; do inference in fp32 so quality stays exactly the same. 2023-08-23 01:08:51 -04:00
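An illustrative sketch of the pattern this commit describes (an assumption, not the ComfyUI implementation): weights live in fp16 to halve memory and are upcast to fp32 only for the forward pass, so the result matches a pure fp32 run while the fp16-to-fp32-and-back round trip leaves the stored weights bit-identical.

```python
import torch

class FP16StoredTextEncoder(torch.nn.Module):
    """Hypothetical wrapper: store weights in fp16, run inference in fp32."""

    def __init__(self, encoder: torch.nn.Module):
        super().__init__()
        self.encoder = encoder.half()   # weights kept in fp16 (half the memory)

    @torch.no_grad()
    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        self.encoder.float()            # upcast weights so the math runs in fp32
        try:
            return self.encoder(tokens)
        finally:
            self.encoder.half()         # back to fp16 storage; the round trip is exact
```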
comfyanonymous
ed16480867 All resolutions now work with t2i adapter for SDXL. 2023-08-22 16:23:54 -04:00
comfyanonymous
b168bdf3e5 T2I adapter SDXL. 2023-08-22 14:40:43 -04:00
comfyanonymous
08af73e450 Controlnet/t2iadapter cleanup. 2023-08-22 01:06:26 -04:00
comfyanonymous
f29b9306fd Fix control lora not working in fp32. 2023-08-21 20:38:31 -04:00
comfyanonymous
b982fd039e Fix ControlLora on lowvram. 2023-08-21 00:54:04 -04:00
comfyanonymous
819c4a42d3 Remove autocast from controlnet code. 2023-08-20 21:47:32 -04:00
comfyanonymous
37a6cb2649 Small cleanups. 2023-08-20 14:56:47 -04:00
Simon Lui
a670a3f848 Further tuning and fix mem_free_total. 2023-08-20 14:19:53 -04:00
Simon Lui
af8959c8a9 Add ipex optimize and other enhancements for Intel GPUs based on recent memory changes. 2023-08-20 14:19:51 -04:00
comfyanonymous
56901bd7c6 --disable-smart-memory now disables loading model directly to vram. 2023-08-20 04:00:53 -04:00