comfyanonymous
842ac7fb1b
CLIP code refactor and improvements.
A more generic CLIP model class that can be used with more types of text
encoders.
Don't apply the weighting algorithm when the weight is 1.0.
Don't compute an empty token output when it's not needed.
2023-11-06 14:17:41 -05:00
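The weight-1.0 short-circuit above also explains why the empty-token pass can be skipped: the empty-prompt encoding is only needed as the interpolation target for weighted tokens. A minimal sketch of that idea, assuming weights arrive as one float per token; the function name and signature are hypothetical, not ComfyUI's actual API:

    import torch

    def apply_token_weights(encoding, weights, encode_empty):
        # If every weight is 1.0 there is nothing to do, and the extra text
        # encoder pass for the empty prompt is never computed.
        if all(w == 1.0 for w in weights):
            return encoding
        empty = encode_empty()  # only paid for when some weight != 1.0
        w = torch.tensor(weights, dtype=encoding.dtype).reshape(1, -1, 1)
        # Interpolate each token's embedding between the empty-prompt
        # encoding and the real encoding according to its weight.
        return empty + (encoding - empty) * w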
comfyanonymous
01e37204ed
Make SDTokenizer class work with more types of tokenizers.
2023-11-06 01:09:18 -05:00
comfyanonymous
9533904e39
Fix checkpoint loader with config.
2023-10-27 22:13:55 -04:00
comfyanonymous
3ad424ff47
SD1 and SD2 CLIP and tokenizer code is now more similar to the SDXL version.
2023-10-27 15:54:04 -04:00
comfyanonymous
817a182bac
Restrict embedding loading to files inside the embedding folders.
2023-10-27 02:54:13 -04:00
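One plausible reading of this restriction is a path-traversal guard that keeps lookups from escaping the configured folders; the helper below is a hedged sketch, not the actual patch:

    import os

    def safe_embedding_path(folder, name):
        # Resolve the requested file and reject anything that escapes the
        # embeddings folder (e.g. a name containing "../").
        path = os.path.abspath(os.path.join(folder, name))
        if not path.startswith(os.path.abspath(folder) + os.sep):
            return None
        return path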
comfyanonymous
11df5713a0
Support for text encoder models that need attention_mask.
2023-09-15 02:02:05 -04:00
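For encoders that need it, the mask is typically derived from the padding positions. A minimal sketch assuming a Hugging Face-style forward signature; the helper itself is hypothetical:

    import torch

    def encode_with_mask(text_encoder, input_ids, pad_token_id):
        # 1 for real tokens, 0 for padding, so padded positions cannot
        # influence the attention output.
        attention_mask = (input_ids != pad_token_id).long()
        return text_encoder(input_ids=input_ids, attention_mask=attention_mask)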
comfyanonymous
36cc11edbd
Fix issue where autocast fp32 CLIP gave different results from the regular fp32 path.
2023-09-11 21:49:56 -04:00
comfyanonymous
145e279e6c
Move text_projection to the base CLIP model.
2023-08-24 23:43:48 -04:00
comfyanonymous
5dbbb2c93c
Fix potential issue with text projection matrix multiplication.
2023-08-24 00:54:16 -04:00
comfyanonymous
1aff0360c3
Initialize text encoder to target dtype.
2023-08-23 21:01:15 -04:00
comfyanonymous
e7fc7fb557
Save memory by storing text encoder weights in fp16 in most situations.
Do inference in fp32 to make sure the quality stays exactly the same.
2023-08-23 01:08:51 -04:00
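One way to combine fp16 storage with fp32-identical outputs is to upcast per operation rather than per model; the class below is an illustrative sketch of that trade-off, not the commit's actual implementation:

    import torch

    class Fp32ComputeLinear(torch.nn.Linear):
        # Assumes the model was converted to fp16 (e.g. model.half()) so the
        # weights occupy half the memory; each matmul then upcasts weight and
        # input to fp32, so outputs match a full-precision model.
        def forward(self, x):
            weight = self.weight.to(torch.float32)
            bias = self.bias.to(torch.float32) if self.bias is not None else None
            return torch.nn.functional.linear(x.to(torch.float32), weight, bias)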
comfyanonymous
bbd5052ed0
Make sure the pooled output stays at the EOS token with added embeddings.
2023-08-03 20:27:50 -04:00
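Background for this fix: CLIP's pooled output is the hidden state at the EOS position, and a common shortcut locates EOS via input_ids.argmax(-1) because EOS has the largest id in the stock vocabulary; once textual-inversion tokens are spliced in, that assumption can break. A sketch that pins the pooled output to the real EOS token (the helper name and the 49407 default are assumptions based on the standard CLIP tokenizer):

    import torch

    def pooled_from_eos(last_hidden_state, input_ids, eos_token_id=49407):
        # argmax over a bool tensor returns the first True, i.e. the first
        # position whose id equals EOS.
        eos_pos = (input_ids == eos_token_id).int().argmax(dim=-1)
        batch = torch.arange(last_hidden_state.shape[0],
                             device=last_hidden_state.device)
        return last_hidden_state[batch, eos_pos]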
comfyanonymous
f67f1c99b8
Fix CLIPSetLastLayer not reverting when removed.
2023-07-15 01:41:21 -04:00
comfyanonymous
28bf6d49da
Fix potential issues with tensors being on different devices.
2023-07-12 19:29:27 -04:00
comfyanonymous
6e99974161
Support the SDXL embedding format with 2 CLIP text encoders.
2023-07-10 10:34:59 -04:00
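SDXL runs two text encoders, so its textual-inversion files carry one tensor per encoder; the commonly seen layout (assumed here) is a dict keyed "clip_l" and "clip_g", while SD1-style files hold a single tensor. A hedged sketch of dispatching on that layout:

    def split_embedding(embed):
        # SDXL embeds: {"clip_l": tensor, "clip_g": tensor}. SD1 embeds are
        # a single tensor for the lone CLIP model.
        if isinstance(embed, dict) and ("clip_l" in embed or "clip_g" in embed):
            return embed.get("clip_l"), embed.get("clip_g")
        return embed, None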
comfyanonymous
d3b3c94616
Fix bug with weights when prompt is long.
2023-07-06 02:43:40 -04:00
comfyanonymous
5ace1146c5
Lower latency by batching some text encoder inputs.
2023-07-01 15:07:39 -04:00
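The latency win here comes from stacking token chunks that previously ran as separate forward passes into one batch. A minimal sketch, assuming fixed-length (77-token) CLIP chunks and an encoder that returns one tensor for the whole batch:

    import torch

    def encode_chunks_batched(text_encoder, token_chunks):
        # One forward pass over a (num_chunks, 77) batch instead of
        # num_chunks separate passes.
        batch = torch.stack(token_chunks, dim=0)
        out = text_encoder(batch)
        return out.unbind(dim=0)  # per-chunk encodings, in order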
comfyanonymous
d5a7abe10d
Try to keep text encoders loaded and patched to increase speed.
load_model_gpu() is now used with the text encoder models instead of just the UNet.
2023-07-01 13:28:07 -04:00
comfyanonymous
e946dca0e1
Make highvram and normalvram shift the text encoders to VRAM and back.
This is faster for big text encoder models than running them on the CPU.
2023-07-01 12:37:23 -04:00
comfyanonymous
7520fc3eac
Fix embeddings not working with --gpu-only.
2023-06-29 20:43:06 -04:00
comfyanonymous
c4c2db6ead
Add DualClipLoader to load CLIP models for SDXL.
Update LoadClip to load CLIP models for the SDXL refiner.
2023-06-25 01:40:38 -04:00
comfyanonymous
08f1f7686c
Support base SDXL and SDXL refiner models.
Large refactor of the model detection and loading code.
2023-06-22 13:03:50 -04:00
comfyanonymous
282638b813
Add a --gpu-only argument to keep and run everything on the GPU.
Make the CLIP model work on the GPU.
2023-06-15 15:38:52 -04:00
comfyanonymous
16c535a309
Properly disable weight initialization in CLIP models.
2023-06-14 20:13:08 -04:00
comfyanonymous
3e75be0a92
Don't initialize CLIP weights to default values.
2023-06-14 12:47:36 -04:00
comfyanonymous
cbf4192f8d
Fix bug where an embedding gets ignored because of a mismatched size.
2023-06-08 23:48:14 -04:00
comfyanonymous
a6d2a9487d
Search recursively in subfolders for embeddings.
2023-05-05 01:28:48 -04:00
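Combined with the multi-path support added earlier (2023-03-18, below), the lookup amounts to walking every configured folder; a hedged sketch, with the extension list as an assumption:

    import os

    def find_embedding(folders, name, exts=(".safetensors", ".pt", ".bin")):
        # Walk each embeddings folder, including subfolders, and return the
        # first file matching <name><ext>.
        for folder in folders:
            for root, _, files in os.walk(folder):
                for ext in exts:
                    if name + ext in files:
                        return os.path.join(root, name + ext)
        return None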
comfyanonymous
f089d4abc7
Some refactoring: from_tokens -> encode_from_tokens
2023-04-15 18:46:58 -04:00
comfyanonymous
1b821e4d57
Merge branch 'master' of https://github.com/BlenderNeko/ComfyUI
2023-04-15 14:16:50 -04:00
BlenderNeko
e550f2f84f
Fix improper padding.
2023-04-15 19:38:21 +02:00
comfyanonymous
0ecef1b4e8
Safely load pickled embeds that don't load with weights_only=True.
2023-04-14 15:33:43 -04:00
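The usual pattern is to try weights_only=True first and, when an old embed pickles extra Python objects, fall back to a restricted unpickler that refuses to construct anything beyond tensor plumbing. The sketch below illustrates the idea and is not the commit's exact code:

    import pickle
    import torch

    class RestrictedUnpickler(pickle.Unpickler):
        # Allow only the classes a tensor checkpoint legitimately needs;
        # anything else raises instead of executing arbitrary code.
        def find_class(self, module, name):
            if module.startswith("torch") or (module, name) == ("collections", "OrderedDict"):
                return super().find_class(module, name)
            raise pickle.UnpicklingError(f"blocked: {module}.{name}")

    class _RestrictedPickle:
        Unpickler = RestrictedUnpickler

        @staticmethod
        def load(f, **kwargs):
            return RestrictedUnpickler(f, **kwargs).load()

    def load_embed(path):
        try:
            return torch.load(path, map_location="cpu", weights_only=True)
        except Exception:
            return torch.load(path, map_location="cpu", weights_only=False,
                              pickle_module=_RestrictedPickle)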
BlenderNeko
47b2d342a8
Ensure backwards compatibility with optional args.
2023-04-14 21:16:55 +02:00
BlenderNeko
779bed1a43
Align behavior with the old tokenize function.
2023-04-14 21:02:45 +02:00
comfyanonymous
4d8a84520f
Don't stop workflow if loading embedding fails.
2023-04-14 13:54:00 -04:00
BlenderNeko
02f7bf6cb8
Add a unique ID per word/embedding for the tokenizer.
2023-04-13 22:01:01 +02:00
comfyanonymous
76a6b372da
Ignore embeddings when sizes don't match and print a WARNING.
2023-04-04 11:49:29 -04:00
comfyanonymous
06c7a9b406
Support multiple paths for embeddings.
2023-03-18 03:08:43 -04:00
comfyanonymous
69df07177d
Support old PyTorch versions.
2023-02-19 16:59:03 -05:00
comfyanonymous
a9207a2c8e
Support people putting commas after the embedding name in the prompt.
2023-02-19 02:50:48 -05:00
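A tolerant parse just strips the trailing punctuation from the name before lookup; a hypothetical helper:

    def parse_embedding_name(token, prefix="embedding:"):
        # "embedding:foo," should resolve the same as "embedding:foo".
        name = token[len(prefix):] if token.startswith(prefix) else token
        return name.rstrip(",").strip()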
comfyanonymous
324273fff2
Fix embeddings not working when placed on a new line.
2023-02-09 14:12:02 -05:00
comfyanonymous
f73e57d881
Add support for textual inversion embedding for SD1.x CLIP.
2023-01-29 18:46:44 -05:00
comfyanonymous
220afe3310
Initial commit.
2023-01-16 22:37:14 -05:00