comfyanonymous
6e99d21369
Print leftover keys when using the UNETLoader.
2023-11-07 22:15:55 -05:00
comfyanonymous
7b2806fee6
Fix issue with object patches not being copied with patcher.
2023-11-07 22:15:15 -05:00
comfyanonymous
cb22d11fc8
Code refactor.
2023-11-07 19:33:40 -05:00
comfyanonymous
397acd549a
Fix unet ops not entirely on GPU.
2023-11-07 04:30:37 -05:00
comfyanonymous
4d21372152
Add: advanced->model->ModelSamplingDiscrete node.
This allows changing the sampling parameters of the model (eps or vpred)
or setting the model to use zsnr.
2023-11-07 03:28:53 -05:00
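For context, the eps and vpred choices differ only in how the denoised image is recovered from the network output. A minimal scalar sketch (function names are illustrative, and sigma_data = 1 is assumed):

```python
import math

def denoised_from_eps(x, model_output, sigma):
    # eps parameterization: the network predicts the added noise,
    # so the denoised estimate is the input minus sigma * noise.
    return x - model_output * sigma

def denoised_from_v(x, model_output, sigma, sigma_data=1.0):
    # v-prediction parameterization: the network predicts a "velocity";
    # the denoised estimate mixes input and output with sigma-dependent
    # weights.
    return (x * sigma_data ** 2 / (sigma ** 2 + sigma_data ** 2)
            - model_output * sigma * sigma_data
            / math.sqrt(sigma ** 2 + sigma_data ** 2))
```

At sigma = 0 both reduce to returning the input unchanged.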
comfyanonymous
842ac7fb1b
CLIP code refactor and improvements.
More generic clip model class that can be used with more types of text
encoders.
Don't apply the weighting algorithm when the weight is 1.0.
Don't compute an empty token output when it's not needed.
2023-11-06 14:17:41 -05:00
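The weight-1.0 shortcut mentioned above can be sketched as follows (a simplified scalar version with illustrative names; the real weighting operates on embedding tensors):

```python
def apply_token_weights(embeddings, weights, empty_embedding=0.0):
    # Common prompt-weighting trick: interpolate each token embedding
    # between the "empty prompt" embedding and itself by its weight.
    # When every weight is 1.0 the result equals the input, so the
    # whole pass (and the empty-token computation) can be skipped.
    if all(w == 1.0 for w in weights):
        return list(embeddings)
    return [empty_embedding + (e - empty_embedding) * w
            for e, w in zip(embeddings, weights)]
```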
comfyanonymous
01e37204ed
Make SDTokenizer class work with more types of tokenizers.
2023-11-06 01:09:18 -05:00
gameltb
b7d50f3d80
fix unet_wrapper_function name in ModelPatcher
2023-11-05 17:11:44 +08:00
comfyanonymous
4aac40b213
Move model sampling code to comfy/model_sampling.py
2023-11-04 01:32:23 -04:00
comfyanonymous
f019f896c6
Don't convert NaN to zero.
Converting NaN to zero is a bad idea because it makes it hard to tell when
something went wrong.
2023-11-03 13:13:15 -04:00
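A small illustration of the idea (the helper name is hypothetical): failing loudly on NaN instead of silently replacing it with zero keeps the error close to its source:

```python
import math

def check_no_nan(values, where="model output"):
    # Raise instead of zeroing: a NaN that is converted to 0 propagates
    # silently and only shows up later as a corrupted result.
    for v in values:
        if math.isnan(v):
            raise ValueError(f"NaN detected in {where}")
    return values
```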
comfyanonymous
f10036cbf7
sampler_cfg_function now gets the noisy output as argument again.
This should make things that use sampler_cfg_function behave as before.
Added an input argument for those that want the denoised output.
This means you can, for example, calculate the x0 prediction of the model
by doing (input - cond).
2023-11-01 21:24:08 -04:00
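A hedged sketch of what such a function might look like (the args keys used here are assumptions based on the commit message, not a documented API):

```python
def my_cfg_function(args):
    # "cond"/"uncond" are assumed to be the noisy-space predictions,
    # "input" the noisy latent, and "cond_scale" the guidance scale.
    cond, uncond = args["cond"], args["uncond"]
    scale, x = args["cond_scale"], args["input"]
    x0 = x - cond  # the x0 prediction described above (illustration only)
    # Plain CFG, returned in the same noisy space the sampler expects:
    return uncond + (cond - uncond) * scale
```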
comfyanonymous
cdfb16b654
Allow model or clip to be None in load_lora_for_models.
2023-11-01 20:27:20 -04:00
comfyanonymous
050c96acdf
Allow ModelSamplingDiscrete to be instantiated without a model config.
2023-11-01 19:13:03 -04:00
comfyanonymous
ff096742bd
Not used anymore.
2023-11-01 00:01:30 -04:00
comfyanonymous
be85c3408b
Fix some issues with sampling precision.
2023-10-31 23:49:29 -04:00
comfyanonymous
169b76657b
Clean up percent start/end and make controlnets work with sigmas.
2023-10-31 22:14:32 -04:00
comfyanonymous
11e03afad6
Remove a bunch of useless code.
DDIM is the same as Euler with a small difference in the inpaint code:
DDIM uses randn_like, but I set a fixed seed instead.
I'm keeping it in because I'm sure people are going to complain if I
remove it.
2023-10-31 18:11:29 -04:00
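The fixed-seed substitution for randn_like can be illustrated with plain Python (a scalar stand-in for the tensor version):

```python
import random

def fixed_seed_noise(n, seed=0):
    # Deterministic replacement for randn_like: seeding a private
    # generator makes the inpaint noise reproducible across runs.
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]
```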
comfyanonymous
ab0cfba6d1
Sampling code changes.
apply_model in model_base now returns the denoised output.
This means that sampling_function now computes things on the denoised
output instead of the model output. This should make things more consistent
across current and future models.
2023-10-31 17:33:43 -04:00
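Working from the denoised output makes sampler steps parameterization-agnostic. A minimal Euler step in that style (a sketch, not the project's exact code):

```python
def euler_step(x, denoised, sigma, sigma_next):
    # k-diffusion style Euler step: the derivative dx/dsigma is computed
    # from the denoised prediction, so the same step works regardless of
    # whether the model is eps- or v-parameterized internally.
    d = (x - denoised) / sigma
    return x + d * (sigma_next - sigma)
```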
comfyanonymous
83a79be597
Fix some memory issues in sub quad attention.
2023-10-30 15:30:49 -04:00
comfyanonymous
8eae4c0adb
Fix some OOM issues with split attention.
2023-10-30 13:14:11 -04:00
comfyanonymous
d5caedbba2
Add --max-upload-size argument; the default is 100MB.
2023-10-29 03:55:46 -04:00
comfyanonymous
9533904e39
Fix checkpoint loader with config.
2023-10-27 22:13:55 -04:00
comfyanonymous
3ad424ff47
SD1 and SD2 clip and tokenizer code is now more similar to the SDXL one.
2023-10-27 15:54:04 -04:00
comfyanonymous
d4bc91d58f
Support SSD1B model and make it easier to support asymmetric unets.
2023-10-27 14:45:15 -04:00
comfyanonymous
817a182bac
Restrict loading embeddings from embedding folders.
2023-10-27 02:54:13 -04:00
comfyanonymous
fbdca5341d
Faster clip image processing.
2023-10-26 01:53:01 -04:00
comfyanonymous
f083f6b663
Fix some OOM issues with split and sub quad attention.
2023-10-25 20:17:28 -04:00
comfyanonymous
62f16ae274
Fix uni_pc returning a noisy image when steps <= 3.
2023-10-25 16:08:30 -04:00
Jedrzej Kosinski
95f137a819
change 'c_adm' to 'y' in ControlNet.get_control
2023-10-25 08:24:32 -05:00
comfyanonymous
62a4b04e7f
Pass extra conds directly to unet.
2023-10-25 00:07:53 -04:00
comfyanonymous
141c4ffcba
Refactor to make it easier to add custom conds to models.
2023-10-24 23:31:12 -04:00
comfyanonymous
5ee8c5fafc
Sampling code refactor to make it easier to add more conds.
2023-10-24 03:38:41 -04:00
comfyanonymous
7f861d49fd
Empty the cache when the torch cache exceeds 25% of free memory.
2023-10-22 13:58:12 -04:00
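The 25% heuristic can be sketched as a pure function (names are illustrative; the real check lives in comfy's memory management code and queries CUDA):

```python
def should_empty_cache(free_device, free_torch, threshold=0.25):
    # torch's caching allocator keeps freed blocks around ("free_torch").
    # If that cached-but-unused memory is more than 25% of all free
    # memory, returning it to the driver (torch.cuda.empty_cache())
    # is worth the cost.
    total_free = free_device + free_torch
    return free_torch > total_free * threshold
```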
comfyanonymous
c73b5fab20
attention_basic now works with hypertile.
2023-10-22 03:59:53 -04:00
comfyanonymous
57381b0892
Make sub_quad and split work with hypertile.
2023-10-22 03:51:29 -04:00
comfyanonymous
4acad03054
Fix t2i adapter issue.
2023-10-21 20:31:24 -04:00
comfyanonymous
5a9a1a50af
Make xformers work with hypertile.
2023-10-21 13:23:03 -04:00
comfyanonymous
370c837794
Fix uni_pc sampler math. This changes the images this sampler produces.
2023-10-20 04:16:53 -04:00
comfyanonymous
f18406d838
Make sure cond_concat is on the right device.
2023-10-19 01:14:25 -04:00
comfyanonymous
516c334a26
Refactor cond_concat into conditioning.
2023-10-18 20:36:58 -04:00
comfyanonymous
23abd3ec84
Fix some potential issues.
2023-10-18 19:48:36 -04:00
comfyanonymous
5cf44c22ad
Refactor cond_concat into model object.
2023-10-18 16:48:37 -04:00
comfyanonymous
1b4650e307
Fix memory issue related to control loras.
The cleanup function was not getting called.
2023-10-18 02:43:01 -04:00
comfyanonymous
459787f78f
Make VAE code closer to sgm.
2023-10-17 15:18:51 -04:00
comfyanonymous
28b98a96d3
Refactor the attention stuff in the VAE.
2023-10-17 03:19:29 -04:00
comfyanonymous
daabf7fd3a
Add some Quadro cards to the list of cards with broken fp16.
2023-10-16 16:48:46 -04:00
comfyanonymous
bffd427388
Add a separate optimized_attention_masked function.
2023-10-16 02:31:24 -04:00
comfyanonymous
db653f4908
Add a --bf16-unet to test running the unet in bf16.
2023-10-13 14:51:10 -04:00
comfyanonymous
728139a5b9
Refactor code so model can be a dtype other than fp32 or fp16.
2023-10-13 14:41:17 -04:00
comfyanonymous
494ddf7717
pytorch_attention_enabled can now return True when xformers is enabled.
2023-10-11 21:30:57 -04:00