Commit Graph

613 Commits

Author SHA1 Message Date
comfyanonymous
56fac7fde1 sampling_function now has the model object as the argument. 2023-11-12 03:45:10 -05:00
comfyanonymous
d4b8742089 Remove useless argument from uni_pc sampler. 2023-11-12 01:25:33 -05:00
comfyanonymous
3520471621 Fix bug. 2023-11-11 12:20:16 -05:00
comfyanonymous
b8824accfa Add option to use in place weight updating in ModelPatcher. 2023-11-11 01:11:12 -05:00
comfyanonymous
c3caf2b8aa Refactor. 2023-11-11 01:11:06 -05:00
comfyanonymous
5b4cacf352 Working RescaleCFG node.
This was broken because of recent changes, so I fixed it and moved it over
from the experiments repo.
2023-11-10 20:52:10 -05:00
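The RescaleCFG node implements the CFG rescale trick from "Common Diffusion Noise Schedules and Sample Steps Are Flawed" (Lin et al.): the CFG result is rescaled so its standard deviation matches the cond prediction's, then blended with the plain CFG result. A minimal sketch, with argument names assumed (the real node operates on per-batch-item model predictions):

```python
import numpy as np

def rescale_cfg(cond_pred, cfg_pred, multiplier=0.7):
    # Rescale the CFG output so its std matches the cond prediction's std,
    # then blend between the rescaled and the unmodified CFG result.
    rescaled = cfg_pred * (cond_pred.std() / cfg_pred.std())
    return multiplier * rescaled + (1.0 - multiplier) * cfg_pred
```

With `multiplier=1.0` the output's std equals the cond prediction's std exactly.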
comfyanonymous
67744beb00 Fix model merge bug.
Unload models before getting weights for model patching.
2023-11-10 03:19:05 -05:00
comfyanonymous
43dcfcd754 Support lcm models.
Use the "lcm" sampler to sample them; you also have to use the
ModelSamplingDiscrete node to set them as lcm models so they work properly.
2023-11-09 18:30:22 -05:00
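An LCM-style sampler jumps straight to the model's x0 prediction at each step and then re-noises it to the next sigma level. A schematic sketch of a single step (hypothetical helper name, not the repo's actual sampler code):

```python
def lcm_step(denoised, sigma_next, noise):
    # Take the model's denoised (x0) prediction directly, then re-noise it
    # to the next sigma level; the final step (sigma 0) returns it as-is.
    if sigma_next > 0:
        return denoised + sigma_next * noise
    return denoised
```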
comfyanonymous
591d338ce3 Add support for full diff lora keys. 2023-11-08 22:05:31 -05:00
comfyanonymous
248d1111de Add a CONDConstant for passing non tensor conds to unet. 2023-11-08 01:59:09 -05:00
comfyanonymous
0d52957c1d Fix typo. 2023-11-07 23:41:55 -05:00
comfyanonymous
6e99d21369 Print leftover keys when using the UNETLoader. 2023-11-07 22:15:55 -05:00
comfyanonymous
7b2806fee6 Fix issue with object patches not being copied with patcher. 2023-11-07 22:15:15 -05:00
comfyanonymous
cb22d11fc8 Code refactor. 2023-11-07 19:33:40 -05:00
comfyanonymous
397acd549a Fix unet ops not entirely on GPU. 2023-11-07 04:30:37 -05:00
comfyanonymous
4d21372152 Add: advanced->model->ModelSamplingDiscrete node.
This allows changing the sampling parameters of the model (eps or vpred)
or setting the model to use zsnr.
2023-11-07 03:28:53 -05:00
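The eps/vpred distinction is about what the model predicts: a v-prediction model outputs v = alpha·eps − sigma·x0 instead of the noise eps. The standard conversions (assuming alpha² + sigma² = 1; hypothetical helper names, not ComfyUI's actual code):

```python
def v_to_x0(x_t, v, alpha_t, sigma_t):
    # Recover the x0 prediction from a v-prediction model's output.
    return alpha_t * x_t - sigma_t * v

def v_to_eps(x_t, v, alpha_t, sigma_t):
    # Recover the eps (noise) prediction from a v-prediction output.
    return sigma_t * x_t + alpha_t * v
```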
comfyanonymous
842ac7fb1b CLIP code refactor and improvements.
More generic clip model class that can be used on more types of text
encoders.

Don't apply weighting algorithm when weight is 1.0

Don't compute an empty token output when it's not needed.
2023-11-06 14:17:41 -05:00
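The "skip weighting at 1.0" optimization follows from the shape of the weighting itself: pulling the token embedding away from the empty-prompt embedding is the identity when the weight is 1.0. A schematic sketch with hypothetical names (not the repo's actual weighting code):

```python
def apply_token_weight(z, z_empty, weight):
    # Schematic prompt weighting: interpolate/extrapolate away from the
    # empty-prompt embedding. weight == 1.0 is a no-op, so skip the work.
    if weight == 1.0:
        return z
    return z_empty + (z - z_empty) * weight
```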
comfyanonymous
01e37204ed Make SDTokenizer class work with more types of tokenizers. 2023-11-06 01:09:18 -05:00
gameltb
b7d50f3d80 fix unet_wrapper_function name in ModelPatcher 2023-11-05 17:11:44 +08:00
comfyanonymous
4aac40b213 Move model sampling code to comfy/model_sampling.py 2023-11-04 01:32:23 -04:00
comfyanonymous
f019f896c6 Don't convert NaN to zero.
Converting NaN to zero is a bad idea because it makes it hard to tell when
something went wrong.
2023-11-03 13:13:15 -04:00
comfyanonymous
f10036cbf7 sampler_cfg_function now gets the noisy output as argument again.
This should make things that use sampler_cfg_function behave like before.

Added an input argument for those that want the denoised output.

This means you can calculate the x0 prediction of the model by doing:
(input - cond) for example.
2023-11-01 21:24:08 -04:00
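Sketch of what a `sampler_cfg_function` might look like after this change, with the args represented as a dict whose key names are assumed for illustration (only the `(input - cond)` x0 relation is stated by the commit message itself):

```python
def x0_prediction(args):
    # x0 prediction from the noisy input, as noted in the commit message:
    # subtracting the cond output from the noisy input recovers x0.
    return args["input"] - args["cond"]

def my_cfg_function(args):
    # Standard CFG combine on the (hypothetical) args dict.
    return args["uncond"] + (args["cond"] - args["uncond"]) * args["cond_scale"]
```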
comfyanonymous
cdfb16b654 Allow model or clip to be None in load_lora_for_models. 2023-11-01 20:27:20 -04:00
comfyanonymous
050c96acdf Allow ModelSamplingDiscrete to be instantiated without a model config. 2023-11-01 19:13:03 -04:00
comfyanonymous
ff096742bd Not used anymore. 2023-11-01 00:01:30 -04:00
comfyanonymous
be85c3408b Fix some issues with sampling precision. 2023-10-31 23:49:29 -04:00
comfyanonymous
169b76657b Clean up percent start/end and make controlnets work with sigmas. 2023-10-31 22:14:32 -04:00
comfyanonymous
11e03afad6 Remove a bunch of useless code.
DDIM is the same as Euler with a small difference in the inpaint code.
DDIM uses randn_like but I set a fixed seed instead.

I'm keeping it in because I'm sure if I remove it people are going to
complain.
2023-10-31 18:11:29 -04:00
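The Euler update the commit compares DDIM to can be sketched as follows (k-diffusion convention; hypothetical helper name):

```python
def euler_step(x, denoised, sigma, sigma_next):
    # One Euler step in sigma space: move along the derivative
    # d = (x - denoised) / sigma toward the next noise level.
    d = (x - denoised) / sigma
    return x + d * (sigma_next - sigma)
```

Note that stepping to `sigma_next = 0` lands exactly on the denoised prediction.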
comfyanonymous
ab0cfba6d1 Sampling code changes.
apply_model in model_base now returns the denoised output.

This means that sampling_function now computes things on the denoised
output instead of the model output. This should make things more consistent
across current and future models.
2023-10-31 17:33:43 -04:00
comfyanonymous
83a79be597 Fix some memory issues in sub quad attention. 2023-10-30 15:30:49 -04:00
comfyanonymous
8eae4c0adb Fix some OOM issues with split attention. 2023-10-30 13:14:11 -04:00
comfyanonymous
d5caedbba2 Add --max-upload-size argument, the default is 100MB. 2023-10-29 03:55:46 -04:00
comfyanonymous
9533904e39 Fix checkpoint loader with config. 2023-10-27 22:13:55 -04:00
comfyanonymous
3ad424ff47 SD1 and SD2 clip and tokenizer code is now more similar to the SDXL one. 2023-10-27 15:54:04 -04:00
comfyanonymous
d4bc91d58f Support SSD1B model and make it easier to support asymmetric unets. 2023-10-27 14:45:15 -04:00
comfyanonymous
817a182bac Restrict loading embeddings from embedding folders. 2023-10-27 02:54:13 -04:00
comfyanonymous
fbdca5341d Faster clip image processing. 2023-10-26 01:53:01 -04:00
comfyanonymous
f083f6b663 Fix some OOM issues with split and sub quad attention. 2023-10-25 20:17:28 -04:00
comfyanonymous
62f16ae274 Fix uni_pc returning a noisy image when steps <= 3. 2023-10-25 16:08:30 -04:00
Jedrzej Kosinski
95f137a819 change 'c_adm' to 'y' in ControlNet.get_control 2023-10-25 08:24:32 -05:00
comfyanonymous
62a4b04e7f Pass extra conds directly to unet. 2023-10-25 00:07:53 -04:00
comfyanonymous
141c4ffcba Refactor to make it easier to add custom conds to models. 2023-10-24 23:31:12 -04:00
comfyanonymous
5ee8c5fafc Sampling code refactor to make it easier to add more conds. 2023-10-24 03:38:41 -04:00
comfyanonymous
7f861d49fd Empty the cache when the torch cache exceeds 25% of free memory. 2023-10-22 13:58:12 -04:00
comfyanonymous
c73b5fab20 attention_basic now works with hypertile. 2023-10-22 03:59:53 -04:00
comfyanonymous
57381b0892 Make sub_quad and split work with hypertile. 2023-10-22 03:51:29 -04:00
comfyanonymous
4acad03054 Fix t2i adapter issue. 2023-10-21 20:31:24 -04:00
comfyanonymous
5a9a1a50af Make xformers work with hypertile. 2023-10-21 13:23:03 -04:00
comfyanonymous
370c837794 Fix uni_pc sampler math. This changes the images this sampler produces. 2023-10-20 04:16:53 -04:00
comfyanonymous
f18406d838 Make sure cond_concat is on the right device. 2023-10-19 01:14:25 -04:00
comfyanonymous
516c334a26 Refactor cond_concat into conditioning. 2023-10-18 20:36:58 -04:00
comfyanonymous
23abd3ec84 Fix some potential issues. 2023-10-18 19:48:36 -04:00
comfyanonymous
5cf44c22ad Refactor cond_concat into model object. 2023-10-18 16:48:37 -04:00
comfyanonymous
1b4650e307 Fix memory issue related to control loras.
The cleanup function was not getting called.
2023-10-18 02:43:01 -04:00
comfyanonymous
459787f78f Make VAE code closer to sgm. 2023-10-17 15:18:51 -04:00
comfyanonymous
28b98a96d3 Refactor the attention stuff in the VAE. 2023-10-17 03:19:29 -04:00
comfyanonymous
daabf7fd3a Add some Quadro cards to the list of cards with broken fp16. 2023-10-16 16:48:46 -04:00
comfyanonymous
bffd427388 Add a separate optimized_attention_masked function. 2023-10-16 02:31:24 -04:00
comfyanonymous
db653f4908 Add a --bf16-unet argument to test running the unet in bf16. 2023-10-13 14:51:10 -04:00
comfyanonymous
728139a5b9 Refactor code so model can be a dtype other than fp32 or fp16. 2023-10-13 14:41:17 -04:00
comfyanonymous
494ddf7717 pytorch_attention_enabled can now return True when xformers is enabled. 2023-10-11 21:30:57 -04:00
comfyanonymous
18e4504de7 Pull some small changes from the other repo. 2023-10-11 20:38:48 -04:00
comfyanonymous
95df4b6174 Allow attn_mask in attention_pytorch. 2023-10-11 20:38:48 -04:00
comfyanonymous
c60864b5e4 Refactor the attention functions.
There's no reason for the whole CrossAttention object to be repeated when
only the operation in the middle changes.
2023-10-11 20:38:48 -04:00
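The point of the refactor is that the attention backends (split, sub-quad, xformers, pytorch) only differ in the "operation in the middle", so that operation becomes a swappable function while the surrounding projections stay shared. A minimal single-head numpy sketch of that middle operation (no batching or masking):

```python
import numpy as np

def softmax_attention(q, k, v):
    # Scaled dot-product attention: the only piece that changes between
    # backends; the CrossAttention projections around it stay shared.
    scale = q.shape[-1] ** -0.5
    scores = (q @ k.T) * scale
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)  # softmax over keys
    return w @ v
```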
comfyanonymous
e095eadda5 Let unet wrapper functions have .to attributes. 2023-10-11 01:34:38 -04:00
comfyanonymous
9c8d53db6c Cleanup. 2023-10-10 21:46:53 -04:00
comfyanonymous
fc126a3d33 Merge branch 'taesd_safetensors' of https://github.com/mochiya98/ComfyUI 2023-10-10 21:42:35 -04:00
Yukimasa Funaoka
74dc9ecc66 Supports TAESD models in safetensors format 2023-10-10 13:21:44 +09:00
comfyanonymous
58c5545dcf Merge branch 'input-directory' of https://github.com/jn-jairo/ComfyUI 2023-10-09 01:53:29 -04:00
comfyanonymous
4456b4d137 load_checkpoint_guess_config can now optionally output the model. 2023-10-06 13:48:18 -04:00
Jairo Correa
26a1afbcfe Option to input directory 2023-10-04 19:45:15 -03:00
City
ee2d8e46e1 Fix quality loss due to low precision 2023-10-04 15:40:59 +02:00
badayvedat
d4aa26b684 fix: typo in extra sampler 2023-09-29 06:09:59 +03:00
comfyanonymous
eeadcff352 Add SamplerDPMPP_2M_SDE node. 2023-09-28 21:56:23 -04:00
comfyanonymous
c59ff7dc9b Print missing VAE keys. 2023-09-28 00:54:57 -04:00
comfyanonymous
97bd301d8f Add missing samplers to KSamplerSelect. 2023-09-28 00:17:03 -04:00
comfyanonymous
75a26ed5ee Add a SamplerCustom Node.
This node takes a list of sigmas and a sampler object as input.

This lets people easily implement custom schedulers and samplers as nodes.

More nodes will be added to it in the future.
2023-09-27 22:21:18 -04:00
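Because SamplerCustom takes the sigmas as a plain list, a custom scheduler is just a function that produces one. A sketch of the well-known Karras et al. (2022) schedule (sigma range defaults assumed for illustration):

```python
def karras_sigmas(n, sigma_min=0.0292, sigma_max=14.61, rho=7.0):
    # Karras schedule: interpolate linearly in sigma**(1/rho) space,
    # then append the trailing 0.0 that samplers expect.
    max_inv = sigma_max ** (1.0 / rho)
    min_inv = sigma_min ** (1.0 / rho)
    ramp = [i / (n - 1) for i in range(n)]
    return [(max_inv + t * (min_inv - max_inv)) ** rho for t in ramp] + [0.0]
```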
comfyanonymous
de1158883e Refactor sampling related code. 2023-09-27 16:45:22 -04:00
comfyanonymous
23ed2b1654 Model patches can now know which batch is positive and negative. 2023-09-27 12:04:07 -04:00
comfyanonymous
67a9216de5 Scheduler code refactor. 2023-09-26 17:07:07 -04:00
comfyanonymous
cbeebba747 Sampling code refactor. 2023-09-26 13:45:15 -04:00
comfyanonymous
b1650c89ce Support more controlnet models. 2023-09-23 18:47:46 -04:00
comfyanonymous
68569d26df Merge branch 'cast_intel' of https://github.com/simonlui/ComfyUI 2023-09-23 00:57:17 -04:00
Simon Lui
47164eb065 Allow Intel GPUs to LoRA cast on GPU since it supports BF16 natively. 2023-09-22 21:11:27 -07:00
comfyanonymous
a990441ba0 Add a way to set output block patches to modify the h and hsp. 2023-09-22 20:26:47 -04:00
comfyanonymous
02f4208e1f Allow having a different pooled output for each image in a batch. 2023-09-21 01:14:42 -04:00
comfyanonymous
795f5b3163 Only do the cast on the device if the device supports it. 2023-09-20 17:52:41 -04:00
comfyanonymous
e67a083a9f Don't depend on torchvision. 2023-09-19 13:12:47 -04:00
MoonRide303
20d8e318c5 Added support for lanczos scaling 2023-09-19 10:40:38 +02:00
comfyanonymous
a6511d69b0 Do lora cast on GPU instead of CPU for higher performance. 2023-09-18 23:04:49 -04:00
comfyanonymous
cdbbeb584d Enable pytorch attention by default on xpu. 2023-09-17 04:09:19 -04:00
comfyanonymous
5d9b731cf9 Support models without previews. 2023-09-16 12:59:54 -04:00
comfyanonymous
6d227d6fe0 Add cond_or_uncond array to transformer_options so hooks can check what is cond and what is uncond. 2023-09-15 22:21:14 -04:00
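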
comfyanonymous
509dd5ca26 Add DDPM sampler. 2023-09-15 19:22:47 -04:00
comfyanonymous
f45f65b6a4 This isn't used anywhere. 2023-09-15 12:03:03 -04:00
comfyanonymous
11df5713a0 Support for text encoder models that need attention_mask. 2023-09-15 02:02:05 -04:00
comfyanonymous
7211e1dc26 Set last layer on SD2.x models uses the proper indexes now.
Before, I had made the last layer the penultimate layer because some
checkpoints don't include it, but that's not consistent with the other models.

TLDR: for SD2.x models only: CLIPSetLastLayer -1 is now -2.
2023-09-14 20:28:22 -04:00
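The index change is easiest to see as plain list indexing over the text encoder's per-layer outputs (hypothetical helper name): with proper indexes, -1 means the final layer and -2 the penultimate one, so old SD2.x workflows that used -1 now need -2.

```python
def set_last_layer(hidden_states, stop_at=-1):
    # hidden_states: list of per-layer outputs from the text encoder.
    # stop_at=-1 keeps the final layer, -2 the penultimate one.
    return hidden_states[stop_at]
```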
comfyanonymous
cac135d12f Don't run text encoders on xpu because there are issues. 2023-09-14 12:16:07 -04:00
comfyanonymous
a3e0950ffb Only parse command line args when main.py is called. 2023-09-13 11:38:20 -04:00
comfyanonymous
c8905151e3 Don't leave very large hidden states in the clip vision output. 2023-09-12 15:09:10 -04:00