Commit Graph

1382 Commits

Author  SHA1  Message  Date
comfyanonymous  39ca2da00c  Fix controlnet bug.  2023-09-01 02:01:08 -04:00
comfyanonymous  a0578c5470  Fix controlnet issue.  2023-08-31 15:16:58 -04:00
comfyanonymous  778c290a67  Fix VAEDecodeTiled minimum.  2023-08-31 14:26:16 -04:00
comfyanonymous  61036397c8  It doesn't make sense for c_crossattn and c_concat to be lists.  2023-08-31 13:25:00 -04:00
comfyanonymous  60ad0ed5af  Update litegraph with upstream: middle mouse dragging.  2023-08-31 02:39:34 -04:00
Ridan Vandenbergh  1212d1c377  Remove forced lowercase on embeddings endpoint  2023-08-30 20:48:55 +02:00
comfyanonymous  845faf8e51  Clean up DiffusersLoader node.  2023-08-30 12:57:07 -04:00
Simon Lui  c902fd7505  Fix error message in model_patcher.py  2023-08-30 00:25:04 -07:00
    Found while tinkering.
comfyanonymous  2c2ba07ff2  Fix "Load Checkpoint with config" node.  2023-08-29 23:58:32 -04:00
comfyanonymous  3ccc42c8d9  Use the GPU for the canny preprocessor when available.  2023-08-29 17:58:40 -04:00
comfyanonymous  f1d6719a51  Add node to convert a specific colour in an image to a mask.  2023-08-29 17:55:42 -04:00
comfyanonymous  0482f057e0  Support SDXL t2i adapters with 3 channel input.  2023-08-29 16:44:57 -04:00
comfyanonymous  21b72ff81b  Move beta_schedule to model_config and allow disabling unet creation.  2023-08-29 14:22:53 -04:00
comfyanonymous  e0214644e9  Merge branch 'feat/mute_bypass_nodes_in_group' of https://github.com/M1kep/ComfyUI  2023-08-29 11:33:40 -04:00
comfyanonymous  8beb1d641c  Merge branch 'preserve-pnginfo' of https://github.com/chrisgoringe/ComfyUI  2023-08-29 11:32:58 -04:00
comfyanonymous  2ab478346d  Remove optimization that caused border.  2023-08-29 11:21:36 -04:00
Chris  85747c8593  check for text attr and save  2023-08-29 18:50:28 +10:00
Chris  e6f3ead6ed  copy metadata into modified image  2023-08-29 18:50:28 +10:00
Michael Poutre  646a86c3a9  refactor(ui): Switch statement, and handle other modes in group actions  2023-08-29 00:24:31 -07:00
Michael Poutre  dcfa585baa  feat(ui): Add node mode toggles to group context menu  2023-08-28 23:49:25 -07:00
comfyanonymous  c019939b9e  Use the same units for tile size in VAEDecodeTiled and VAEEncodeTiled.  2023-08-29 01:51:35 -04:00
comfyanonymous  27f54c5481  Merge branch 'master' of https://github.com/bvhari/ComfyUI  2023-08-29 01:42:00 -04:00
comfyanonymous  5da23d7f05  No need to check filename extensions to detect shuffle controlnet.  2023-08-28 16:49:06 -04:00
comfyanonymous  e4957ff97e  Put clip vision outputs on the CPU.  2023-08-28 16:26:11 -04:00
comfyanonymous  256eb57284  Load clipvision model to GPU for faster performance.  2023-08-28 15:29:27 -04:00
comfyanonymous  4fb6163a21  Text encoder should initially load on the offload_device not the regular.  2023-08-28 15:08:45 -04:00
comfyanonymous  201631e61d  Move ModelPatcher to model_patcher.py  2023-08-28 14:51:31 -04:00
BVH  719f132e04  Reduce min tile size for encode  2023-08-28 22:39:09 +05:30
comfyanonymous  daae7db069  Implement loras with norm keys.  2023-08-28 11:20:06 -04:00
BVH  c5bd14012a  Make tile size in Tiled VAE encode/decode user configurable  2023-08-28 19:57:22 +05:30
Dr.Lt.Data  62a6a05b23  support on prompt event handler (#765)  2023-08-28 00:52:22 -04:00
    Co-authored-by: Lt.Dr.Data <lt.dr.data@gmail.com>
comfyanonymous  ae3f7060d8  Enable bf16-vae by default on ampere and up.  2023-08-27 23:06:19 -04:00
comfyanonymous  6932eda1fb  Fallback to slice attention if xformers doesn't support the operation.  2023-08-27 22:24:42 -04:00
comfyanonymous  1b9a6a9599  Make --bf16-vae work on torch 2.0  2023-08-27 21:33:53 -04:00
comfyanonymous  9c880848af  Merge branch 'increase_client_max_size' of https://github.com/ramyma/ComfyUI  2023-08-27 13:12:39 -04:00
Dr.Lt.Data  9e14d8a1a6  fix: cannot disable dynamicPrompts (#1327)  2023-08-27 12:34:24 -04:00
    * fix: cannot disable dynamicPrompts
    * indent fix
    Co-authored-by: Lt.Dr.Data <lt.dr.data@gmail.com>
ramyma  dff4463ecb  Increase client_max_size to allow bigger request bodies  2023-08-26 19:48:20 +03:00
comfyanonymous  90bfcef833  Fix lowvram model merging.  2023-08-26 11:52:07 -04:00
comfyanonymous  30d39b387d  The new smart memory management makes this unnecessary.  2023-08-25 18:02:15 -04:00
comfyanonymous  a2602fc4f9  Move controlnet code to comfy/controlnet.py  2023-08-25 17:33:04 -04:00
comfyanonymous  7fb8cbb5ac  Move lora code to comfy/lora.py  2023-08-25 17:11:51 -04:00
comfyanonymous  145e279e6c  Move text_projection to base clip model.  2023-08-24 23:43:48 -04:00
comfyanonymous  4731c0b618  Code cleanups.  2023-08-24 19:39:18 -04:00
comfyanonymous  74d1dfb0ad  Try to free enough vram for control lora inference.  2023-08-24 17:20:54 -04:00
comfyanonymous  5dbbb2c93c  Fix potential issue with text projection matrix multiplication.  2023-08-24 00:54:16 -04:00
comfyanonymous  e340ef7852  Always shift text encoder to GPU when the device supports fp16.  2023-08-23 21:45:00 -04:00
comfyanonymous  5ef57a983b  Even with forced fp16 the cpu device should never use it.  2023-08-23 21:38:28 -04:00
comfyanonymous  1aff0360c3  Initialize text encoder to target dtype.  2023-08-23 21:01:15 -04:00
comfyanonymous  e7fc7fb557  Save memory by storing text encoder weights in fp16 in most situations.  2023-08-23 01:08:51 -04:00
    Do inference in fp32 to make sure quality stays the exact same.
comfyanonymous  e26b2b6fd3  Don't hardcode node names for image upload widget.  2023-08-22 19:41:49 -04:00