Commit Graph

582 Commits

comfyanonymous
eeceef948d Add taesd and taesdxl to VAELoader node.
They will show up if the taesd_encoder and taesd_decoder (or the taesdxl
equivalent) model files are present in the models/vae_approx directory.
2023-11-21 12:54:19 -05:00
comfyanonymous
613c21b071 Make it easy for models to process the unet state dict on load. 2023-11-20 23:17:53 -05:00
comfyanonymous
476120a5a8 percent_to_sigma now returns a float instead of a tensor. 2023-11-18 23:20:29 -05:00
comfyanonymous
c013b8e94c Add some command line arguments to store text encoder weights in fp8.
Pytorch supports two variants of fp8:
--fp8_e4m3fn-text-enc (the one that seems to give better results)
--fp8_e5m2-text-enc
2023-11-17 02:56:59 -05:00
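The two flags above might be used like this when launching ComfyUI (this invocation is illustrative and assumes the usual main.py entry point):

```shell
# Illustrative only: assumes ComfyUI's standard main.py entry point.
# Store text encoder weights in fp8 e4m3fn (the variant that seems to
# give better results, per the commit message):
python main.py --fp8_e4m3fn-text-enc

# Or use the e5m2 variant instead:
python main.py --fp8_e5m2-text-enc
```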
comfyanonymous
848ff2f09d Add support for loading SSD1B diffusers unet version.
Improve diffusers model detection.
2023-11-16 23:12:55 -05:00
comfyanonymous
7676c8f41a Make deep shrink behave like it should. 2023-11-16 15:26:28 -05:00
comfyanonymous
3d07161f42 Fix potential issues. 2023-11-16 14:59:54 -05:00
comfyanonymous
87d2cb7b0b Print warning when controlnet can't be applied instead of crashing. 2023-11-16 12:57:12 -05:00
comfyanonymous
5514c4b8ce Invert the start and end percentages in the code.
This doesn't affect how percentages behave in the frontend but breaks
things if you relied on them in the backend.

percent_to_sigma goes from 0 to 1.0 instead of 1.0 to 0 for less confusion.

Make percent 0 return an extremely large sigma and percent 1.0 return a
sigma of zero to fix imprecision.
2023-11-16 04:23:44 -05:00
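A minimal sketch of the convention described above: percent 0 maps to an extremely large sigma and percent 1.0 maps to zero. The interior schedule and the constants here are made up for illustration and are not ComfyUI's actual sigma schedule:

```python
SIGMA_MAX = 999999999.9  # stand-in for "an extremely large sigma"
SIGMA_START = 14.6       # illustrative: roughly SD's usual starting sigma

def percent_to_sigma(percent: float) -> float:
    """Map a start/end percent in [0.0, 1.0] to a sigma (larger = noisier)."""
    if percent <= 0.0:
        return SIGMA_MAX  # percent 0: effectively "from the very start"
    if percent >= 1.0:
        return 0.0        # percent 1.0: all the way to the end
    # Illustrative interior schedule: plain linear falloff.
    return (1.0 - percent) * SIGMA_START
```

Returning a plain float rather than a tensor matches the change in commit 476120a5a8 above.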
comfyanonymous
e5eeb33ac4 heunpp2 sampler. 2023-11-14 23:50:55 -05:00
comfyanonymous
35b7304ac5 Fix last pr. 2023-11-14 14:41:31 -05:00
comfyanonymous
6d974664f0 Merge branch 'master' of https://github.com/Jannchie/ComfyUI 2023-11-14 14:38:07 -05:00
comfyanonymous
95a0f5b46b Make bislerp work on GPU. 2023-11-14 11:38:36 -05:00
comfyanonymous
702395d4ad Clean up and refactor sampler code.
This should make it much easier to write custom nodes with kdiffusion type
samplers.
2023-11-14 00:39:34 -05:00
Jianqi Pan
58a23fc106 fix: adaptation to older versions of pytorch 2023-11-14 14:32:05 +09:00
comfyanonymous
f6333be536 Add a way to add patches to the input block. 2023-11-14 00:08:12 -05:00
comfyanonymous
9f546f0cb3 Disable xformers when it can't load properly. 2023-11-13 12:31:10 -05:00
comfyanonymous
b416b4f00f Make memory estimation aware of model dtype. 2023-11-12 04:28:26 -05:00
comfyanonymous
21ea9c3263 Allow different models to estimate memory usage differently. 2023-11-12 04:03:52 -05:00
comfyanonymous
56fac7fde1 sampling_function now has the model object as the argument. 2023-11-12 03:45:10 -05:00
comfyanonymous
d4b8742089 Remove useless argument from uni_pc sampler. 2023-11-12 01:25:33 -05:00
comfyanonymous
3520471621 Fix bug. 2023-11-11 12:20:16 -05:00
comfyanonymous
b8824accfa Add option to use in place weight updating in ModelPatcher. 2023-11-11 01:11:12 -05:00
comfyanonymous
c3caf2b8aa Refactor. 2023-11-11 01:11:06 -05:00
comfyanonymous
5b4cacf352 Working RescaleCFG node.
This was broken because of recent changes so I fixed it and moved it from
the experiments repo.
2023-11-10 20:52:10 -05:00
comfyanonymous
67744beb00 Fix model merge bug.
Unload models before getting weights for model patching.
2023-11-10 03:19:05 -05:00
comfyanonymous
43dcfcd754 Support lcm models.
Use the "lcm" sampler to sample them; you also have to use the
ModelSamplingDiscrete node to set them as lcm models to use them properly.
2023-11-09 18:30:22 -05:00
comfyanonymous
591d338ce3 Add support for full diff lora keys. 2023-11-08 22:05:31 -05:00
comfyanonymous
248d1111de Add a CONDConstant for passing non tensor conds to unet. 2023-11-08 01:59:09 -05:00
comfyanonymous
0d52957c1d Fix typo. 2023-11-07 23:41:55 -05:00
comfyanonymous
6e99d21369 Print leftover keys when using the UNETLoader. 2023-11-07 22:15:55 -05:00
comfyanonymous
7b2806fee6 Fix issue with object patches not being copied with patcher. 2023-11-07 22:15:15 -05:00
comfyanonymous
cb22d11fc8 Code refactor. 2023-11-07 19:33:40 -05:00
comfyanonymous
397acd549a Fix unet ops not entirely on GPU. 2023-11-07 04:30:37 -05:00
comfyanonymous
4d21372152 Add: advanced->model->ModelSamplingDiscrete node.
This allows changing the sampling parameterization of the model (eps or vpred)
or setting the model to use zsnr.
2023-11-07 03:28:53 -05:00
comfyanonymous
842ac7fb1b CLIP code refactor and improvements.
More generic clip model class that can be used on more types of text
encoders.

Don't apply the weighting algorithm when the weight is 1.0

Don't compute an empty token output when it's not needed.
2023-11-06 14:17:41 -05:00
comfyanonymous
01e37204ed Make SDTokenizer class work with more types of tokenizers. 2023-11-06 01:09:18 -05:00
gameltb
b7d50f3d80 fix unet_wrapper_function name in ModelPatcher 2023-11-05 17:11:44 +08:00
comfyanonymous
4aac40b213 Move model sampling code to comfy/model_sampling.py 2023-11-04 01:32:23 -04:00
comfyanonymous
f019f896c6 Don't convert NaN to zero.
Converting NaN to zero is a bad idea because it makes it hard to tell when
something went wrong.
2023-11-03 13:13:15 -04:00
comfyanonymous
f10036cbf7 sampler_cfg_function now gets the noisy output as argument again.
This should make things that use sampler_cfg_function behave like before.

Added an input argument for those that want the denoised output.

This means you can, for example, calculate the x0 prediction of the model
by doing (input - cond).
2023-11-01 21:24:08 -04:00
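A hedged sketch of what a custom sampler_cfg_function might look like under the convention described above. The args keys used here ("cond", "uncond", "cond_scale", "input") follow the commit message and common ComfyUI usage, but treat them as assumptions rather than the definitive API:

```python
def my_cfg_function(args):
    """Standard classifier-free guidance: uncond + scale * (cond - uncond)."""
    cond = args["cond"]
    uncond = args["uncond"]
    scale = args["cond_scale"]
    return uncond + scale * (cond - uncond)

def x0_prediction(args):
    """The x0 prediction mentioned in the commit message: (input - cond)."""
    return args["input"] - args["cond"]
```

In ComfyUI such a function is typically registered on a model via the ModelPatcher, though the registration mechanism is outside the scope of this commit.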
comfyanonymous
cdfb16b654 Allow model or clip to be None in load_lora_for_models. 2023-11-01 20:27:20 -04:00
comfyanonymous
050c96acdf Allow ModelSamplingDiscrete to be instantiated without a model config. 2023-11-01 19:13:03 -04:00
comfyanonymous
ff096742bd Not used anymore. 2023-11-01 00:01:30 -04:00
comfyanonymous
be85c3408b Fix some issues with sampling precision. 2023-10-31 23:49:29 -04:00
comfyanonymous
169b76657b Clean up percent start/end and make controlnets work with sigmas. 2023-10-31 22:14:32 -04:00
comfyanonymous
11e03afad6 Remove a bunch of useless code.
DDIM is the same as euler with a small difference in the inpaint code.
DDIM uses randn_like but I set a fixed seed instead.

I'm keeping it in because I'm sure if I remove it people are going to
complain.
2023-10-31 18:11:29 -04:00
comfyanonymous
ab0cfba6d1 Sampling code changes.
apply_model in model_base now returns the denoised output.

This means that sampling_function now computes things on the denoised
output instead of the model output. This should make things more consistent
across current and future models.
2023-10-31 17:33:43 -04:00
comfyanonymous
83a79be597 Fix some memory issues in sub quad attention. 2023-10-30 15:30:49 -04:00
comfyanonymous
8eae4c0adb Fix some OOM issues with split attention. 2023-10-30 13:14:11 -04:00