Commit Graph

32 Commits

Author SHA1 Message Date
comfyanonymous
e7fc7fb557 Save memory by storing text encoder weights in fp16 in most situations.
Do inference in fp32 to make sure quality stays the exact same.
2023-08-23 01:08:51 -04:00
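The commit above describes a standard memory/precision split: weights are *stored* in fp16 (halving memory) but *computed* in fp32 so results match a full-precision run. A minimal sketch of the idea, using numpy and a hypothetical layer class (this is an illustration of the technique, not ComfyUI's actual code):

```python
import numpy as np

# Hypothetical illustration: keep a linear layer's weights stored in
# fp16 to halve memory, but upcast to fp32 for the actual matmul so
# the numerical result matches a full-precision run closely.
class FP16StoredLinear:
    def __init__(self, weight_fp32: np.ndarray):
        # Storage dtype: float16 -- half the bytes of float32.
        self.weight = weight_fp32.astype(np.float16)

    def forward(self, x: np.ndarray) -> np.ndarray:
        # Compute dtype: float32 -- upcast only for the math.
        return x.astype(np.float32) @ self.weight.astype(np.float32)

w = np.ones((4, 4), dtype=np.float32)
layer = FP16StoredLinear(w)
out = layer.forward(np.ones((1, 4), dtype=np.float32))
# Storage is 4*4*2 = 32 bytes instead of 64; output is still fp32.
print(layer.weight.nbytes, out.dtype)
```

The storage cast loses precision only once, at load time; doing the arithmetic in fp32 avoids compounding fp16 rounding error across layers.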
comfyanonymous
bbd5052ed0 Make sure the pooled output stays at the EOS token with added embeddings. 2023-08-03 20:27:50 -04:00
comfyanonymous
f67f1c99b8 Fix CLIPSetLastLayer not reverting when removed. 2023-07-15 01:41:21 -04:00
comfyanonymous
28bf6d49da Fix potential tensors being on different devices issues. 2023-07-12 19:29:27 -04:00
comfyanonymous
6e99974161 Support SDXL embedding format with 2 CLIP. 2023-07-10 10:34:59 -04:00
comfyanonymous
d3b3c94616 Fix bug with weights when prompt is long. 2023-07-06 02:43:40 -04:00
comfyanonymous
5ace1146c5 Lower latency by batching some text encoder inputs. 2023-07-01 15:07:39 -04:00
comfyanonymous
d5a7abe10d Try to keep text encoders loaded and patched to increase speed.
load_model_gpu() is now used with the text encoder models instead of just
the unet.
2023-07-01 13:28:07 -04:00
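The commit above trades memory for speed by keeping text encoders loaded between invocations rather than reloading them per call. A minimal sketch of that caching pattern, with hypothetical names (`load_model`, `expensive_loader`) that are assumptions for illustration, not ComfyUI's API:

```python
# Hypothetical sketch: reuse an expensive-to-load model across calls
# instead of reloading it each time. Names here are illustrative only.
_loaded = {}

def load_model(name, loader):
    # Load on first request; return the cached instance afterwards.
    if name not in _loaded:
        _loaded[name] = loader()
    return _loaded[name]

calls = []
def expensive_loader():
    calls.append(1)            # track how often we actually "load"
    return {"weights": "..."}

m1 = load_model("clip", expensive_loader)
m2 = load_model("clip", expensive_loader)
print(m1 is m2, len(calls))    # same object, loaded only once
```

The real change is more involved (device placement, weight patching), but the core win is the same: the second and later calls skip the load entirely.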
comfyanonymous
e946dca0e1 Make highvram and normalvram shift the text encoders to vram and back.
This is faster on big text encoder models than running it on the CPU.
2023-07-01 12:37:23 -04:00
comfyanonymous
7520fc3eac Fix embeddings not working with --gpu-only 2023-06-29 20:43:06 -04:00
comfyanonymous
c4c2db6ead Add DualClipLoader to load clip models for SDXL.
Update LoadClip to load clip models for SDXL refiner.
2023-06-25 01:40:38 -04:00
comfyanonymous
08f1f7686c Support base SDXL and SDXL refiner models.
Large refactor of the model detection and loading code.
2023-06-22 13:03:50 -04:00
comfyanonymous
282638b813 Add a --gpu-only argument to keep and run everything on the GPU.
Make the CLIP model work on the GPU.
2023-06-15 15:38:52 -04:00
comfyanonymous
16c535a309 Properly disable weight initialization in clip models. 2023-06-14 20:13:08 -04:00
comfyanonymous
3e75be0a92 Don't initialize clip weights to default values. 2023-06-14 12:47:36 -04:00
comfyanonymous
cbf4192f8d Fix bug when embedding gets ignored because of mismatched size. 2023-06-08 23:48:14 -04:00
comfyanonymous
a6d2a9487d Search recursively in subfolders for embeddings. 2023-05-05 01:28:48 -04:00
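The commit above switches embedding lookup from the top-level folder only to a recursive walk of all subfolders. A small self-contained sketch of that behavior (the function name and extension list are assumptions, not ComfyUI's actual implementation):

```python
import os
import tempfile

# Hypothetical sketch: collect embedding files from a folder and all
# of its subfolders, rather than the top level alone.
def find_embeddings(root, exts=(".pt", ".safetensors")):
    found = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if os.path.splitext(name)[1] in exts:
                found.append(os.path.join(dirpath, name))
    return sorted(found)

# Demo layout: a top-level-only scan would miss "sub/style.pt".
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "sub"))
open(os.path.join(root, "face.pt"), "w").close()
open(os.path.join(root, "sub", "style.pt"), "w").close()
open(os.path.join(root, "readme.txt"), "w").close()
print(len(find_embeddings(root)))  # → 2
```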
comfyanonymous
f089d4abc7 Some refactoring: from_tokens -> encode_from_tokens 2023-04-15 18:46:58 -04:00
comfyanonymous
1b821e4d57 Merge branch 'master' of https://github.com/BlenderNeko/ComfyUI 2023-04-15 14:16:50 -04:00
BlenderNeko
e550f2f84f fixed improper padding 2023-04-15 19:38:21 +02:00
comfyanonymous
0ecef1b4e8 Safely load pickled embeds that don't load with weights_only=True. 2023-04-14 15:33:43 -04:00
BlenderNeko
47b2d342a8 ensure backwards compat with optional args 2023-04-14 21:16:55 +02:00
BlenderNeko
779bed1a43 align behavior with old tokenize function 2023-04-14 21:02:45 +02:00
comfyanonymous
4d8a84520f Don't stop workflow if loading embedding fails. 2023-04-14 13:54:00 -04:00
BlenderNeko
02f7bf6cb8 add unique ID per word/embedding for tokenizer 2023-04-13 22:01:01 +02:00
comfyanonymous
76a6b372da Ignore embeddings when sizes don't match and print a WARNING. 2023-04-04 11:49:29 -04:00
comfyanonymous
06c7a9b406 Support multiple paths for embeddings. 2023-03-18 03:08:43 -04:00
comfyanonymous
69df07177d Support old pytorch. 2023-02-19 16:59:03 -05:00
comfyanonymous
a9207a2c8e Support people putting commas after the embedding name in the prompt. 2023-02-19 02:50:48 -05:00
comfyanonymous
324273fff2 Fix embedding not working when on new line. 2023-02-09 14:12:02 -05:00
comfyanonymous
f73e57d881 Add support for textual inversion embedding for SD1.x CLIP. 2023-01-29 18:46:44 -05:00
comfyanonymous
220afe3310 Initial commit. 2023-01-16 22:37:14 -05:00