Commit Graph

173 Commits

Author SHA1 Message Date
BlenderNeko
d556513e2a gligen tuple 2023-04-24 21:47:57 +02:00
BlenderNeko
7db702ecd0 made sample functions more explicit 2023-04-24 12:53:10 +02:00
BlenderNeko
ac6686d523 add docstrings 2023-04-23 20:09:09 +02:00
BlenderNeko
1e9a5097e1 Merge branch 'master' of https://github.com/BlenderNeko/ComfyUI 2023-04-23 20:02:18 +02:00
BlenderNeko
3e6b963e46 refactor/split various bits of code for sampling 2023-04-23 20:02:08 +02:00
comfyanonymous
e6771d0986 Implement Linear hypernetworks.
Add a HypernetworkLoader node to use hypernetworks.
2023-04-23 12:35:25 -04:00
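Linear hypernetworks of the kind this commit describes wrap an attention layer's key/value vectors with a pair of extra linear maps whose output is added back residually. A minimal pure-Python sketch of that residual form (all names here are illustrative, not ComfyUI's actual API):

```python
# Illustrative sketch of a "linear" hypernetwork: two extra linear maps
# applied residually to an attention key/value vector. Hypothetical names.

def matvec(w, x):
    """Multiply matrix w (list of rows) by vector x."""
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def apply_linear_hypernetwork(v, w_down, w_up):
    """Return v + W_up(W_down(v)) -- the residual form typically used."""
    hidden = matvec(w_down, v)
    out = matvec(w_up, hidden)
    return [vi + oi for vi, oi in zip(v, out)]

# Example: identity-down, zero-up leaves the vector unchanged.
v = [1.0, 2.0]
w_down = [[1.0, 0.0], [0.0, 1.0]]
w_up = [[0.0, 0.0], [0.0, 0.0]]
print(apply_linear_hypernetwork(v, w_down, w_up))  # -> [1.0, 2.0]
```

A loader node would read the trained `w_down`/`w_up` pairs from the hypernetwork file and patch them into the model's attention layers.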
comfyanonymous
0d66023475 This makes pytorch2.0 attention perform a bit faster. 2023-04-22 14:30:39 -04:00
comfyanonymous
ed4d73d4fd Remove some useless code. 2023-04-20 23:58:25 -04:00
comfyanonymous
a86c06e8da Don't pass adm to model when it doesn't support it. 2023-04-19 21:11:38 -04:00
comfyanonymous
6c156642e4 Add support for GLIGEN textbox model. 2023-04-19 11:06:32 -04:00
comfyanonymous
3fe1252a35 Add a way for nodes to set a custom CFG function. 2023-04-17 11:05:15 -04:00
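Classifier-free guidance (CFG) blends the model's conditional and unconditional predictions; letting a node supply its own CFG function means swapping out that blend. A hedged stdlib sketch of the idea (the function names are illustrative, and real implementations operate on tensors, not lists):

```python
# Classifier-free guidance: the sampler predicts noise twice (with and
# without the prompt) and blends the two results. A custom CFG function
# replaces this blend. Pure-Python sketch with hypothetical names.

def default_cfg(cond, uncond, scale):
    """Standard CFG: uncond + scale * (cond - uncond), element-wise."""
    return [u + scale * (c - u) for c, u in zip(cond, uncond)]

def guided_prediction(cond, uncond, scale, cfg_fn=default_cfg):
    """A node could pass its own cfg_fn here to override the default."""
    return cfg_fn(cond, uncond, scale)

print(guided_prediction([1.0, 3.0], [0.0, 1.0], 2.0))  # -> [2.0, 5.0]
```

With `scale=1.0` the blend reduces to the conditional prediction alone, which is a quick sanity check for any custom CFG function.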
comfyanonymous
4df70d0f62 Fix model_management import so it doesn't get executed twice. 2023-04-15 19:04:33 -04:00
comfyanonymous
f089d4abc7 Some refactoring: from_tokens -> encode_from_tokens 2023-04-15 18:46:58 -04:00
comfyanonymous
1b821e4d57 Merge branch 'master' of https://github.com/BlenderNeko/ComfyUI 2023-04-15 14:16:50 -04:00
BlenderNeko
e550f2f84f fixed improper padding 2023-04-15 19:38:21 +02:00
comfyanonymous
3b9a2f504d Move code to empty gpu cache to model_management.py 2023-04-15 11:19:07 -04:00
comfyanonymous
0ecef1b4e8 Safely load pickled embeds that don't load with weights_only=True. 2023-04-14 15:33:43 -04:00
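`weights_only=True` rejects pickles that reference arbitrary globals; the usual fallback for such files is to unpickle through a restricted loader that only resolves an explicit allow-list. A stdlib sketch of that pattern (the allow-list contents are illustrative):

```python
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    """Only resolve globals from an explicit allow-list; refuse the rest."""
    ALLOWED = {("collections", "OrderedDict")}  # illustrative allow-list

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"forbidden global: {module}.{name}")

def safe_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

# Plain containers unpickle fine; anything needing other globals fails.
print(safe_loads(pickle.dumps({"weight": [0.1, 0.2]})))
```

The point is that a malicious pickle cannot call into arbitrary code paths: any global lookup outside the allow-list raises instead of executing.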
BlenderNeko
47b2d342a8 ensure backwards compat with optional args 2023-04-14 21:16:55 +02:00
BlenderNeko
779bed1a43 align behavior with old tokenize function 2023-04-14 21:02:45 +02:00
comfyanonymous
4d8a84520f Don't stop workflow if loading embedding fails. 2023-04-14 13:54:00 -04:00
BlenderNeko
928cb6b2c7 split tokenizer from encoder 2023-04-13 22:06:50 +02:00
BlenderNeko
02f7bf6cb8 add unique ID per word/embedding for tokenizer 2023-04-13 22:01:01 +02:00
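Tagging every sub-token with an id unique to its source word (or embedding) lets later weighting steps know which tokens belong together. A toy sketch of the bookkeeping, assuming a trivial per-character "tokenizer" in place of a real sub-word one:

```python
# Sketch: tag tokens with a per-word id so downstream code can tell which
# sub-tokens came from the same word or embedding. The toy "tokenizer"
# splits words into characters; real tokenizers emit sub-word pieces.

from itertools import count

def tokenize_with_ids(words, tokenize=lambda w: list(w)):
    ids = count(1)
    out = []
    for word in words:
        word_id = next(ids)               # one unique id per word
        for tok in tokenize(word):
            out.append((tok, word_id))    # every sub-token shares it
    return out

print(tokenize_with_ids(["hi", "yo"]))
# -> [('h', 1), ('i', 1), ('y', 2), ('o', 2)]
```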
comfyanonymous
9c2869e012 Fix for new transformers version. 2023-04-09 15:55:21 -04:00
comfyanonymous
4861dbb2e2 Print xformers version and warning about 0.0.18 2023-04-09 01:31:47 -04:00
comfyanonymous
7f1c54adeb Clarify what --windows-standalone-build does. 2023-04-07 15:52:56 -04:00
comfyanonymous
88e5ccb415 Cleanup. 2023-04-07 02:31:46 -04:00
comfyanonymous
d4301d49d3 Fix loading SD1.5 diffusers checkpoint. 2023-04-07 01:30:33 -04:00
comfyanonymous
8aebe865f2 Merge branch 'master' of https://github.com/sALTaccount/ComfyUI 2023-04-07 01:03:43 -04:00
comfyanonymous
d35efcbcb2 Add a --force-fp32 argument to force fp32 for debugging. 2023-04-07 00:27:54 -04:00
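A flag like `--force-fp32` is typically a plain `store_true` argument that the precision-selection code consults before choosing a dtype. A hedged sketch of the mechanism (the parser and helper here are hypothetical, not ComfyUI's real ones):

```python
import argparse

# Hedged sketch of the debugging flag: the flag name mirrors the commit
# message, but this parser and pick_dtype helper are hypothetical.
parser = argparse.ArgumentParser()
parser.add_argument("--force-fp32", action="store_true",
                    help="Force fp32 everywhere (useful for debugging).")

def pick_dtype(args, gpu_supports_fp16=True):
    if args.force_fp32 or not gpu_supports_fp16:
        return "fp32"
    return "fp16"

args = parser.parse_args(["--force-fp32"])
print(pick_dtype(args))  # -> fp32
```

Note that argparse exposes `--force-fp32` as the attribute `args.force_fp32` (dashes become underscores).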
comfyanonymous
55a48f27db Small refactor. 2023-04-06 23:53:54 -04:00
comfyanonymous
69bc45aac3 Merge branch 'ipex' of https://github.com/kwaa/ComfyUI-IPEX 2023-04-06 23:45:29 -04:00
藍+85CD
3adedd52d3 Merge branch 'master' into ipex 2023-04-07 09:11:30 +08:00
EllangoK
b12efcd7b4 fixes lack of support for multi configs
also adds some metavars to argparse
2023-04-06 19:06:39 -04:00
comfyanonymous
f2ca0a15ee Rename the cors parameter to something more verbose. 2023-04-06 15:24:55 -04:00
EllangoK
405685ce89 makes cors a cli parameter 2023-04-06 15:06:22 -04:00
EllangoK
1992795f88 set listen flag to listen on all if specified 2023-04-06 13:19:00 -04:00
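"Listen on all if specified" is the classic argparse `nargs="?"` pattern: a default for when the flag is absent, a `const` for when it appears bare, and an explicit value otherwise. A sketch with hypothetical defaults (not necessarily ComfyUI's actual addresses):

```python
import argparse

# Sketch of a --listen flag: defaults to localhost, binds to all
# interfaces when given bare, and accepts an explicit address too.
# The names and defaults are illustrative.
parser = argparse.ArgumentParser()
parser.add_argument("--listen", nargs="?", default="127.0.0.1",
                    const="0.0.0.0",
                    help="IP to listen on (bare flag = all interfaces).")

print(parser.parse_args([]).listen)                         # -> 127.0.0.1
print(parser.parse_args(["--listen"]).listen)               # -> 0.0.0.0
print(parser.parse_args(["--listen", "10.0.0.5"]).listen)   # -> 10.0.0.5
```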
藍+85CD
01c0951a73 Fix auto lowvram detection on CUDA 2023-04-06 15:44:05 +08:00
sALTaccount
671feba9e6 diffusers loader 2023-04-05 23:57:31 -07:00
藍+85CD
23fd4abad1 Use separate variables instead of vram_state 2023-04-06 14:24:47 +08:00
藍+85CD
772741f218 Import intel_extension_for_pytorch as ipex 2023-04-06 12:27:22 +08:00
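Optional accelerator backends are usually imported behind a guard so the program degrades cleanly when the package is absent. A sketch of the pattern (the `device_name` helper is hypothetical; the import target is the real IPEX package name):

```python
# Optional-accelerator import guard: import Intel's IPEX if present,
# fall back cleanly otherwise. device_name() is an illustrative helper.
try:
    import intel_extension_for_pytorch as ipex  # noqa: F401
    XPU_AVAILABLE = True
except ImportError:
    ipex = None
    XPU_AVAILABLE = False

def device_name():
    return "xpu" if XPU_AVAILABLE else "cpu"

print(device_name())
```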
EllangoK
cd00b46465 separates out arg parser and imports args 2023-04-05 23:41:23 -04:00
藍+85CD
16f4d42aa0 Add basic XPU device support
closed #387
2023-04-05 21:22:14 +08:00
comfyanonymous
7597a5d83e Disable xformers in VAE when xformers == 0.0.18 2023-04-04 22:22:02 -04:00
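Pinning out a single bad release is a version gate: parse the version string and skip the fast path for exactly that version. An illustrative sketch, assuming plain `x.y.z` version strings (suffixes like `.dev` would need extra handling):

```python
# Version gate of the kind the commit describes: parse the version and
# skip the xformers fast path for one known-bad release. Illustrative.

def parse_version(v):
    return tuple(int(p) for p in v.split(".")[:3])

def use_xformers_in_vae(xformers_version):
    # 0.0.18 had a known issue, so fall back for exactly that release.
    return parse_version(xformers_version) != (0, 0, 18)

print(use_xformers_in_vae("0.0.18"))  # -> False
print(use_xformers_in_vae("0.0.17"))  # -> True
```

Comparing parsed tuples rather than raw strings avoids surprises like `"0.0.9" > "0.0.18"` under lexicographic ordering.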
comfyanonymous
76a6b372da Ignore embeddings when sizes don't match and print a WARNING. 2023-04-04 11:49:29 -04:00
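Skipping a mismatched embedding instead of raising keeps one bad file from aborting the whole prompt. A stdlib sketch of the guard, with lists standing in for tensors and a hypothetical `load_embedding` helper:

```python
import warnings

# Sketch of the guard: ignore an embedding whose vector width doesn't
# match what the text encoder expects, warning instead of raising.
# load_embedding and its list-of-lists "tensor" are illustrative.

def load_embedding(vectors, expected_dim):
    if any(len(v) != expected_dim for v in vectors):
        warnings.warn("embedding size mismatch, ignoring embedding")
        return None
    return vectors

print(load_embedding([[0.1, 0.2]], expected_dim=2))    # accepted
print(load_embedding([[0.1, 0.2]], expected_dim=768))  # warns -> None
```

The caller then treats `None` as "no embedding", so sampling continues with the rest of the prompt intact.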
comfyanonymous
d96fde8103 Remove print. 2023-04-03 22:58:54 -04:00
comfyanonymous
0d1c6a3934 Pull latest tomesd code from upstream. 2023-04-03 15:49:28 -04:00
comfyanonymous
0e06be56ad Add noise augmentation setting to unCLIPConditioning. 2023-04-03 13:50:29 -04:00
comfyanonymous
b55667284c Add support for unCLIP SD2.x models.
See _for_testing/unclip in the UI for the new nodes.

unCLIPCheckpointLoader is used to load them.

unCLIPConditioning is used to add the image cond and takes as input a
CLIPVisionEncode output which has been moved to the conditioning section.
2023-04-01 23:19:15 -04:00
comfyanonymous
3567c01bc3 This seems to give better quality in tome. 2023-03-31 18:36:18 -04:00
comfyanonymous
7da8d5f9f5 Add a TomePatchModel node to the _for_testing section.
Tome increases sampling speed at the expense of quality.
2023-03-31 17:19:58 -04:00
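Token merging ("tome") trades quality for speed by collapsing similar tokens so attention has fewer of them to process. A much-simplified illustration that merges the single most similar pair by averaging (real ToMe uses bipartite soft matching over many pairs per layer):

```python
import math

# Simplified token-merging illustration: find the most cosine-similar
# pair of token vectors and average them, shrinking the sequence the
# attention layers must process. Not the actual ToMe algorithm.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def merge_most_similar(tokens):
    i, j = max(
        ((i, j) for i in range(len(tokens)) for j in range(i + 1, len(tokens))),
        key=lambda p: cosine(tokens[p[0]], tokens[p[1]]),
    )
    merged = [(x + y) / 2 for x, y in zip(tokens[i], tokens[j])]
    return [t for k, t in enumerate(tokens) if k not in (i, j)] + [merged]

toks = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
print(merge_most_similar(toks))  # 3 tokens -> 2
```

Here the two nearly-parallel vectors are merged while the orthogonal one survives, which is the intuition behind the speed/quality trade-off the commit mentions.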