Commit Graph

284 Commits

Author SHA1 Message Date
comfyanonymous
ade1b2244e Merge branch 'condition_by_mask_node' of https://github.com/guill/ComfyUI 2023-04-29 15:05:18 -04:00
Jacob Segal
f7dd560777 Default to sampling entire image
By default, when applying a mask to a condition, the entire image will
still be used for sampling. The new "set_area_to_bounds" option on the
node will allow the user to automatically limit conditioning to the
bounds of the mask.

I've also removed the dependency on torchvision for calculating bounding
boxes. I've taken the opportunity to fix some frustrating details in the
other version:
1. An all-0 mask will no longer cause an error
2. Indices are returned as integers instead of floats so they can be
   used to index into tensors.
2023-04-29 00:16:58 -07:00
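
A minimal sketch of how a torchvision-free bounding-box calculation along these lines might look, assuming a 2D mask tensor; the function name and fallback behavior are illustrative, not the node's actual code:

```python
import torch

def mask_to_bounds(mask: torch.Tensor):
    """Return (y_min, x_min, y_max, x_max) of the non-zero region of a 2D mask."""
    nonzero = torch.nonzero(mask)                 # (N, 2) tensor of (y, x) indices
    if nonzero.numel() == 0:
        # all-0 mask: fall back to the whole image instead of raising
        return 0, 0, mask.shape[0], mask.shape[1]
    y_min, x_min = nonzero.min(dim=0).values.tolist()        # plain Python ints,
    y_max, x_max = (nonzero.max(dim=0).values + 1).tolist()  # usable for slicing
    return y_min, x_min, y_max, x_max
```

With something like "set_area_to_bounds" enabled, conditioning could then be limited to `mask[y_min:y_max, x_min:x_max]` instead of the full image.
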
comfyanonymous
806786ed1d Don't try to get vram from xpu or cuda when directml is enabled. 2023-04-29 00:28:48 -04:00
comfyanonymous
e7ae3bc44c You can now select the device index with: --directml id
Like this for example: --directml 1
2023-04-28 16:51:35 -04:00
comfyanonymous
c2afcad2a5 Basic torch_directml support. Use --directml to use it. 2023-04-28 14:28:57 -04:00
Jacob Segal
6bfd8b6b1a Add Condition by Mask node
This PR adds support for a Condition by Mask node. This node allows
conditioning to be limited to a non-rectangle area.
2023-04-27 20:03:27 -07:00
comfyanonymous
a9e0b4177d Add callback to sampler function.
Callback format is: callback(step, x0, x)
2023-04-27 04:38:44 -04:00
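
A hypothetical illustration of the callback signature described above; the sampler entry point, keyword name, and the meaning attributed to each argument are assumptions, not ComfyUI's exact API:

```python
def progress_callback(step, x0, x):
    # step: index of the current sampling step
    # x0:   the model's current estimate of the fully denoised latent (assumed)
    # x:    the noisy latent at this step (assumed)
    print(f"step {step}: latent shape {tuple(x.shape)}")

# e.g. passed into a sampling call that accepts a callback argument:
# samples = sampler.sample(..., callback=progress_callback)
```
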
comfyanonymous
d95ef10342 Some fixes to the batch masks PR. 2023-04-25 01:12:40 -04:00
comfyanonymous
288c72fe9f Refactor more code to sample.py 2023-04-24 23:25:51 -04:00
comfyanonymous
112d4ab7d7 This is cleaner this way. 2023-04-24 22:45:35 -04:00
BlenderNeko
d556513e2a gligen tuple 2023-04-24 21:47:57 +02:00
pythongosssss
51a069805b Add progress to vae decode tiled 2023-04-24 11:55:44 +01:00
BlenderNeko
7db702ecd0 made sample functions more explicit 2023-04-24 12:53:10 +02:00
BlenderNeko
ac6686d523 add docstrings 2023-04-23 20:09:09 +02:00
BlenderNeko
1e9a5097e1 Merge branch 'master' of https://github.com/BlenderNeko/ComfyUI 2023-04-23 20:02:18 +02:00
BlenderNeko
3e6b963e46 refactor/split various bits of code for sampling 2023-04-23 20:02:08 +02:00
comfyanonymous
e6771d0986 Implement Linear hypernetworks.
Add a HypernetworkLoader node to use hypernetworks.
2023-04-23 12:35:25 -04:00
comfyanonymous
0d66023475 This makes pytorch 2.0 attention perform a bit faster. 2023-04-22 14:30:39 -04:00

comfyanonymous
ed4d73d4fd Remove some useless code. 2023-04-20 23:58:25 -04:00
comfyanonymous
a86c06e8da Don't pass adm to model when it doesn't support it. 2023-04-19 21:11:38 -04:00
comfyanonymous
6c156642e4 Add support for GLIGEN textbox model. 2023-04-19 11:06:32 -04:00
comfyanonymous
3fe1252a35 Add a way for nodes to set a custom CFG function. 2023-04-17 11:05:15 -04:00
comfyanonymous
4df70d0f62 Fix model_management import so it doesn't get executed twice. 2023-04-15 19:04:33 -04:00
comfyanonymous
f089d4abc7 Some refactoring: from_tokens -> encode_from_tokens 2023-04-15 18:46:58 -04:00
comfyanonymous
1b821e4d57 Merge branch 'master' of https://github.com/BlenderNeko/ComfyUI 2023-04-15 14:16:50 -04:00
BlenderNeko
e550f2f84f fixed improper padding 2023-04-15 19:38:21 +02:00
comfyanonymous
3b9a2f504d Move code to empty gpu cache to model_management.py 2023-04-15 11:19:07 -04:00
comfyanonymous
0ecef1b4e8 Safely load pickled embeds that don't load with weights_only=True. 2023-04-14 15:33:43 -04:00
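
One way to approximate the pattern this commit describes, assuming the legacy embeddings are plain pickle files; the class and function names are illustrative and the real fallback may be restricted differently:

```python
import pickle
import torch

class RestrictedUnpickler(pickle.Unpickler):
    """Refuse to rebuild anything outside a small allow-list of classes."""
    def find_class(self, module, name):
        if module.startswith("torch") or (module, name) == ("collections", "OrderedDict"):
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked: {module}.{name}")

def load_embedding(path):
    try:
        # preferred path: torch refuses arbitrary pickled objects here
        return torch.load(path, map_location="cpu", weights_only=True)
    except Exception:
        # legacy raw-pickle embeddings: load with a restricted unpickler
        with open(path, "rb") as f:
            return RestrictedUnpickler(f).load()
```
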
BlenderNeko
47b2d342a8 ensure backwards compat with optional args 2023-04-14 21:16:55 +02:00
BlenderNeko
779bed1a43 align behavior with old tokenize function 2023-04-14 21:02:45 +02:00
comfyanonymous
4d8a84520f Don't stop workflow if loading embedding fails. 2023-04-14 13:54:00 -04:00
BlenderNeko
928cb6b2c7 split tokenizer from encoder 2023-04-13 22:06:50 +02:00
BlenderNeko
02f7bf6cb8 add unique ID per word/embedding for tokenizer 2023-04-13 22:01:01 +02:00
comfyanonymous
9c2869e012 Fix for new transformers version. 2023-04-09 15:55:21 -04:00
comfyanonymous
4861dbb2e2 Print xformers version and warning about 0.0.18 2023-04-09 01:31:47 -04:00
comfyanonymous
7f1c54adeb Clarify what --windows-standalone-build does. 2023-04-07 15:52:56 -04:00
comfyanonymous
88e5ccb415 Cleanup. 2023-04-07 02:31:46 -04:00
comfyanonymous
d4301d49d3 Fix loading SD1.5 diffusers checkpoint. 2023-04-07 01:30:33 -04:00
comfyanonymous
8aebe865f2 Merge branch 'master' of https://github.com/sALTaccount/ComfyUI 2023-04-07 01:03:43 -04:00
comfyanonymous
d35efcbcb2 Add a --force-fp32 argument to force fp32 for debugging. 2023-04-07 00:27:54 -04:00
comfyanonymous
55a48f27db Small refactor. 2023-04-06 23:53:54 -04:00
comfyanonymous
69bc45aac3 Merge branch 'ipex' of https://github.com/kwaa/ComfyUI-IPEX 2023-04-06 23:45:29 -04:00
藍+85CD
3adedd52d3 Merge branch 'master' into ipex 2023-04-07 09:11:30 +08:00
EllangoK
b12efcd7b4 fixes lack of support for multiple configs
also adds some metavars to argparse
2023-04-06 19:06:39 -04:00
comfyanonymous
f2ca0a15ee Rename the cors parameter to something more verbose. 2023-04-06 15:24:55 -04:00
EllangoK
405685ce89 makes cors a cli parameter 2023-04-06 15:06:22 -04:00
EllangoK
1992795f88 set listen flag to listen on all if specified 2023-04-06 13:19:00 -04:00
藍+85CD
01c0951a73 Fix auto lowvram detection on CUDA 2023-04-06 15:44:05 +08:00
sALTaccount
671feba9e6 diffusers loader 2023-04-05 23:57:31 -07:00
藍+85CD
23fd4abad1 Use separate variables instead of vram_state 2023-04-06 14:24:47 +08:00
藍+85CD
772741f218 Import intel_extension_for_pytorch as ipex 2023-04-06 12:27:22 +08:00
EllangoK
cd00b46465 separates out arg parser and imports args 2023-04-05 23:41:23 -04:00
藍+85CD
16f4d42aa0 Add basic XPU device support
closed #387
2023-04-05 21:22:14 +08:00
comfyanonymous
7597a5d83e Disable xformers in VAE when xformers == 0.0.18 2023-04-04 22:22:02 -04:00
comfyanonymous
76a6b372da Ignore embeddings when sizes don't match and print a WARNING. 2023-04-04 11:49:29 -04:00
comfyanonymous
d96fde8103 Remove print. 2023-04-03 22:58:54 -04:00
comfyanonymous
0d1c6a3934 Pull latest tomesd code from upstream. 2023-04-03 15:49:28 -04:00
comfyanonymous
0e06be56ad Add noise augmentation setting to unCLIPConditioning. 2023-04-03 13:50:29 -04:00
comfyanonymous
b55667284c Add support for unCLIP SD2.x models.
See _for_testing/unclip in the UI for the new nodes.

unCLIPCheckpointLoader is used to load them.

unCLIPConditioning is used to add the image cond and takes as input a
CLIPVisionEncode output which has been moved to the conditioning section.
2023-04-01 23:19:15 -04:00
comfyanonymous
3567c01bc3 This seems to give better quality in tome. 2023-03-31 18:36:18 -04:00
comfyanonymous
7da8d5f9f5 Add a TomePatchModel node to the _for_testing section.
Tome increases sampling speed at the expense of quality.
2023-03-31 17:19:58 -04:00
comfyanonymous
bcdd11d687 Add a way to pass options to the transformers blocks. 2023-03-31 13:04:39 -04:00
comfyanonymous
4b174ce0b4 Fix noise mask not working with > 1 batch size on ksamplers. 2023-03-30 03:50:12 -04:00
comfyanonymous
7e2f4b0897 Split VAE decode batches depending on free memory. 2023-03-29 02:24:37 -04:00
comfyanonymous
7a5ba4593c Fix ddim_uniform crashing with 37 steps. 2023-03-28 16:29:35 -04:00
Francesco Yoshi Gobbo
be201f0511 code cleanup 2023-03-27 06:48:09 +02:00
Francesco Yoshi Gobbo
28e1209e7e no lowvram state if cpu only 2023-03-27 04:51:18 +02:00
comfyanonymous
184ea9fd54 Fix ddim for Mac: #264 2023-03-26 00:36:54 -04:00
comfyanonymous
089cd3adc1 I don't think controlnets were being handled correctly by MPS. 2023-03-24 14:33:16 -04:00
comfyanonymous
4f76d503c7 Merge branch 'master' of https://github.com/GaidamakUA/ComfyUI 2023-03-24 13:56:43 -04:00
Yurii Mazurevich
1206cb3386 Fixed typo 2023-03-24 19:39:55 +02:00
comfyanonymous
5d8c210a1d Make ddim work with --cpu 2023-03-24 11:39:51 -04:00
Yurii Mazurevich
b9b8b1893d Removed unnecessary comment 2023-03-24 14:15:30 +02:00
Yurii Mazurevich
f500d638c6 Added MPS device support 2023-03-24 14:12:56 +02:00
comfyanonymous
121beab586 Support loha that use cp decomposition. 2023-03-23 04:32:25 -04:00
comfyanonymous
d15c9f2c3b Add loha support. 2023-03-23 03:40:12 -04:00
comfyanonymous
7cae5f5769 Try again with vae tiled decoding if regular fails because of OOM. 2023-03-22 14:49:00 -04:00
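
The retry pattern is roughly the following, where `vae.decode` and `vae.decode_tiled` stand in for whatever the actual decode entry points are called; treat this as a sketch rather than the real code:

```python
import torch

def decode_with_fallback(vae, latent):
    try:
        return vae.decode(latent)
    except torch.cuda.OutOfMemoryError:
        # the regular decode ran out of VRAM: free the cache and retry in tiles
        torch.cuda.empty_cache()
        return vae.decode_tiled(latent)
```
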
comfyanonymous
d8d65e7c9f Less seams in tiled outputs at the cost of more processing. 2023-03-22 03:29:09 -04:00
comfyanonymous
13ec5cd3c2 Try to improve VAEEncode memory usage a bit. 2023-03-22 02:45:18 -04:00
comfyanonymous
db20d0a9fe Add laptop quadro cards to fp32 list. 2023-03-21 16:57:35 -04:00
comfyanonymous
921f4e14a4 Add support for locon mid weights. 2023-03-21 14:51:51 -04:00
comfyanonymous
29728e1119 Try to fix a vram issue with controlnets. 2023-03-19 10:50:38 -04:00
comfyanonymous
508010d114 Fix area composition feathering not working properly. 2023-03-19 02:00:52 -04:00
comfyanonymous
06c7a9b406 Support multiple paths for embeddings. 2023-03-18 03:08:43 -04:00
comfyanonymous
d5bf8038c8 Merge T2IAdapterLoader and ControlNetLoader.
Workflows will be auto updated.
2023-03-17 18:17:59 -04:00
comfyanonymous
dc8b43e512 Make --cpu have priority over everything else. 2023-03-13 21:30:01 -04:00
comfyanonymous
889689c7b2 use half() on fp16 models loaded with config. 2023-03-13 21:12:48 -04:00
comfyanonymous
edba761868 Use half() function on model when loading in fp16. 2023-03-13 20:58:09 -04:00
comfyanonymous
5be28c4069 Remove omegaconf dependency and some ci changes. 2023-03-13 14:49:18 -04:00
comfyanonymous
1de721b33c Add pytorch attention support to VAE. 2023-03-13 12:45:54 -04:00
comfyanonymous
72b42ab260 --disable-xformers should not even try to import xformers. 2023-03-13 11:36:48 -04:00
comfyanonymous
7c95e1a03b Xformers is now properly disabled when --cpu used.
Added --windows-standalone-build option; currently it only makes the
code open up comfyui in the browser.
2023-03-12 15:44:16 -04:00
comfyanonymous
d777b2f4a9 Add a VAEEncodeTiled node. 2023-03-11 15:28:15 -05:00
comfyanonymous
bbdc5924b4 Try to fix memory issue. 2023-03-11 15:15:13 -05:00
comfyanonymous
adabcc74fb Make tiled_scale work for downscaling. 2023-03-11 14:58:55 -05:00
comfyanonymous
77290f7110 Tiled upscaling with the upscale models. 2023-03-11 14:04:13 -05:00
comfyanonymous
66cca3659d Add locon support. 2023-03-09 21:41:24 -05:00
comfyanonymous
fa4dda2550 SD2.x controlnets now work. 2023-03-08 01:13:38 -05:00
comfyanonymous
341ebaabd8 Relative imports to test something. 2023-03-07 11:00:35 -05:00
edikius
148fe7b116 Fixed import (#44)
* fixed import error

I had an
ImportError: cannot import name 'Protocol' from 'typing'
while trying to update, so I fixed it so the app could start

* Update main.py

* deleted example files
2023-03-06 11:41:40 -05:00
comfyanonymous
339b2533eb Fix clip_skip no longer being loaded from yaml file. 2023-03-06 11:34:02 -05:00
comfyanonymous
ca7e2e3827 Add --cpu to use the cpu for inference. 2023-03-06 10:50:50 -05:00
comfyanonymous
663fa6eafd Implement support for t2i style model.
It needs the CLIPVision model so I added CLIPVisionLoader and CLIPVisionEncode.

Put the clip vision model in models/clip_vision
Put the t2i style model in models/style_models

StyleModelLoader to load it, StyleModelApply to apply it
ConditioningAppend to append the conditioning it outputs to a positive one.
2023-03-05 18:39:25 -05:00
comfyanonymous
f8f2ea3bb1 Make VAE use common function to get free memory. 2023-03-05 14:20:07 -05:00
comfyanonymous
ffc5c6707b Fix pytorch 2.0 cross attention not working. 2023-03-05 14:14:54 -05:00
comfyanonymous
7ee5e2d40e Add support for new colour T2I adapter model. 2023-03-03 19:13:07 -05:00
comfyanonymous
23be9a77b8 Update T2I adapter code to latest. 2023-03-03 18:46:49 -05:00
comfyanonymous
8141cd7f42 Fix issue. 2023-03-03 13:18:01 -05:00
comfyanonymous
c195dab61c Add a node to set CLIP skip.
Use a simpler way to detect if the model is v-prediction.
2023-03-03 13:04:36 -05:00
comfyanonymous
5608730809 To be really simple, CheckpointLoaderSimple should pick the right type. 2023-03-03 11:07:10 -05:00
comfyanonymous
8a2699b47d New CheckpointLoaderSimple to load checkpoints without a config. 2023-03-03 03:37:35 -05:00
comfyanonymous
b2a7f1b32a Make some cross attention functions work on the CPU. 2023-03-03 03:27:33 -05:00
comfyanonymous
666a9e8604 Add some pytorch scaled_dot_product_attention code for testing.
--use-pytorch-cross-attention to use it.
2023-03-02 17:01:20 -05:00
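
A minimal, self-contained example of the PyTorch 2.0 attention call this flag switches to; the shapes follow the usual (batch, heads, tokens, head_dim) layout and are only illustrative:

```python
import torch
import torch.nn.functional as F

device = "cuda" if torch.cuda.is_available() else "cpu"
q = torch.randn(1, 8, 4096, 64, device=device)
k = torch.randn(1, 8, 4096, 64, device=device)
v = torch.randn(1, 8, 4096, 64, device=device)

# On CUDA this dispatches to a fused (flash / memory-efficient) kernel when one is available.
out = F.scaled_dot_product_attention(q, k, v)
```
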
comfyanonymous
b59b82a73b Add a way to interrupt current processing in the backend. 2023-03-02 14:42:03 -05:00
comfyanonymous
151fed3dfb Hopefully fix a strange issue with xformers + lowvram. 2023-02-28 13:48:52 -05:00
comfyanonymous
a59bb36cb8 Try to improve memory issues with del. 2023-02-28 12:27:43 -05:00
comfyanonymous
1304a8f8ad Small adjustment. 2023-02-27 20:04:18 -05:00
comfyanonymous
bddbd4bdb0 Enable highvram automatically when vram >> ram 2023-02-27 19:57:39 -05:00
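
A rough sketch of the kind of check this implies, using psutil for system RAM; the actual threshold and policy in model_management.py may differ:

```python
import psutil
import torch

def should_use_highvram():
    if not torch.cuda.is_available():
        return False
    vram = torch.cuda.get_device_properties(0).total_memory
    ram = psutil.virtual_memory().total
    # keep models resident on the GPU when VRAM exceeds system RAM
    # (the real check may require a larger margin than this)
    return vram > ram
```
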
comfyanonymous
14c390c0c2 Remove sample_ from some sampler names.
Old workflows will still work.
2023-02-27 01:43:06 -05:00
comfyanonymous
7add9fe7b3 Preparing to add another function to load checkpoints. 2023-02-26 17:29:01 -05:00
comfyanonymous
84a4b1f283 Fix uni_pc sampler not working with 1 or 2 steps. 2023-02-26 04:01:01 -05:00
comfyanonymous
084f8084ed Fix multiple controlnets not working. 2023-02-25 22:12:22 -05:00
comfyanonymous
9f1ca0847b Fixed issue when batched image was used as a controlnet input. 2023-02-25 14:57:28 -05:00
comfyanonymous
7f31625158 Fix missing variable. 2023-02-25 12:19:03 -05:00
comfyanonymous
4a31343a3d Add a T2IAdapterLoader node to load T2I-Adapter models.
They are loaded as CONTROL_NET objects because they are similar.
2023-02-25 01:24:56 -05:00
comfyanonymous
5cb9c83936 Prepare for t2i adapter. 2023-02-24 23:36:17 -05:00
comfyanonymous
1dfbc27fb6 Remove some useless imports 2023-02-24 12:36:55 -05:00
comfyanonymous
a4778b6a31 Added an experimental VAEDecodeTiled.
This decodes the image with the VAE in tiles which should be faster and
use less vram.

It's in the _for_testing section so I might change/remove it or even
add the functionality to the regular VAEDecode node depending on how
well it performs, so don't depend too much on it.
2023-02-24 02:10:10 -05:00
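
A very simplified sketch of what tile-by-tile decoding looks like, assuming an 8x VAE upscale factor and a `vae.decode` callable; the real node also overlaps and blends tiles to hide seams:

```python
import torch

def decode_tiled(vae, latent, tile=64):
    b, c, h, w = latent.shape
    scale = 8                                    # SD VAEs turn latents into 8x larger images
    out = torch.zeros((b, 3, h * scale, w * scale))
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            part = latent[:, :, y:y + tile, x:x + tile]
            decoded = vae.decode(part)           # decode one small tile at a time
            out[:, :, y * scale:(y + part.shape[2]) * scale,
                      x * scale:(x + part.shape[3]) * scale] = decoded
    return out
```

Decoding one tile at a time keeps the peak activation memory small, which is why it can succeed where a full-image decode runs out of VRAM.
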
comfyanonymous
f1b122ba02 Add a node to load diff controlnets. 2023-02-22 23:22:03 -05:00
comfyanonymous
2e62af0a30 Implement DDIM sampler. 2023-02-22 21:10:19 -05:00
comfyanonymous
a17765cf94 Uni_PC: make max denoise behave more like other samplers.
On the KSamplers, a denoise of 1.0 is the same as txt2img, but there was a
small difference on UniPC.
2023-02-22 02:21:06 -05:00
comfyanonymous
2124888a6c Remove prints that are useless when xformers is enabled. 2023-02-21 22:16:13 -05:00
comfyanonymous
00ef1aabeb Add uni_pc bh2 variant. 2023-02-21 16:11:48 -05:00
comfyanonymous
9cd0189e7e ControlNetApply now stacks.
It can be used to apply multiple control nets at the same time.
2023-02-21 01:18:53 -05:00
comfyanonymous
69df07177d Support old pytorch. 2023-02-19 16:59:03 -05:00
comfyanonymous
a9207a2c8e Support people putting commas after the embedding name in the prompt. 2023-02-19 02:50:48 -05:00
comfyanonymous
0f13853bd2 Add: --highvram for when you want models to stay on the vram. 2023-02-17 21:27:02 -05:00
comfyanonymous
273a3ebc67 Fix an OOM issue. 2023-02-17 16:21:01 -05:00
comfyanonymous
7dd65e43dd Low vram mode for controlnets. 2023-02-17 15:48:16 -05:00
comfyanonymous
7010bf9e60 Use fp16 for fp16 control nets. 2023-02-17 15:31:38 -05:00
comfyanonymous
9999b1e198 Add a way to control controlnet strength. 2023-02-16 18:08:01 -05:00
comfyanonymous
b5b68268ee Add ControlNet support. 2023-02-16 10:38:08 -05:00
comfyanonymous
6d0f92bcb2 Use inpaint models the proper way by using VAEEncodeForInpaint. 2023-02-15 20:44:51 -05:00
comfyanonymous
886b269d42 Support for inpaint models. 2023-02-15 16:38:20 -05:00
comfyanonymous
ce4e2f2955 Add masks to samplers code for inpainting. 2023-02-15 13:16:38 -05:00
comfyanonymous
e3451cea4f uni_pc now works with KSamplerAdvanced return_with_leftover_noise. 2023-02-13 12:29:21 -05:00
comfyanonymous
f542f248f1 Show the right amount of steps in the progress bar for uni_pc.
The extra step doesn't actually call the unet so it doesn't belong in
the progress bar.
2023-02-11 14:59:42 -05:00
comfyanonymous
f10b8948c3 768-v support for uni_pc sampler. 2023-02-11 04:34:58 -05:00
comfyanonymous
ce0aeb109e Remove print. 2023-02-11 03:41:40 -05:00
comfyanonymous
5489d5af04 Add uni_pc sampler to KSampler* nodes. 2023-02-11 03:34:09 -05:00