comfyanonymous
842ac7fb1b
CLIP code refactor and improvements.
...
More generic clip model class that can be used on more types of text
encoders.
Don't apply weighting algorithm when weight is 1.0
Don't compute an empty token output when it's not needed.
2023-11-06 14:17:41 -05:00
comfyanonymous
01e37204ed
Make SDTokenizer class work with more types of tokenizers.
2023-11-06 01:09:18 -05:00
gameltb
b7d50f3d80
fix unet_wrapper_function name in ModelPatcher
2023-11-05 17:11:44 +08:00
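For context, a unet wrapper function of the kind ModelPatcher accepts can be sketched as below. This is a minimal sketch: the args keys ("input", "timestep", "c") follow ComfyUI conventions but are assumptions here, and the commented setter name is illustrative.

```python
def my_unet_wrapper(apply_model, args):
    # Hedged sketch of a unet wrapper for ModelPatcher; the args keys
    # ("input", "timestep", "c") are assumed, not confirmed by the commit.
    x = args["input"]      # noisy latent
    t = args["timestep"]
    c = args["c"]          # dict of extra conds
    # a wrapper can adjust inputs here, or post-process the result
    return apply_model(x, t, **c)

# patcher.set_model_unet_function_wrapper(my_unet_wrapper)  # assumed setter name
```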
comfyanonymous
4aac40b213
Move model sampling code to comfy/model_sampling.py
2023-11-04 01:32:23 -04:00
comfyanonymous
f019f896c6
Don't convert NaN to zero.
...
Converting NaN to zero is a bad idea because it makes it hard to tell when
something went wrong.
2023-11-03 13:13:15 -04:00
comfyanonymous
f10036cbf7
sampler_cfg_function now gets the noisy output as argument again.
...
This should make things that use sampler_cfg_function behave like before.
Added an input argument for those that want the denoised output.
This means you can calculate the x0 prediction of the model by doing:
(input - cond) for example.
2023-11-01 21:24:08 -04:00
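A minimal sketch of a sampler_cfg_function under this interface, assuming the argument names ("cond", "uncond", "cond_scale", "input") described in the commit; plain numbers stand in for tensors:

```python
def my_cfg_function(args):
    # Hedged sketch: cond/uncond are the noisy model outputs again, and
    # "input" is the noisy latent fed to the model, per the commit message.
    cond = args["cond"]
    uncond = args["uncond"]
    scale = args["cond_scale"]
    x = args["input"]
    x0_pred = x - cond  # the x0 prediction mentioned in the commit (shown only to illustrate the identity)
    # standard classifier-free guidance, unchanged
    return uncond + (cond - uncond) * scale
```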
comfyanonymous
cdfb16b654
Allow model or clip to be None in load_lora_for_models.
2023-11-01 20:27:20 -04:00
comfyanonymous
050c96acdf
Allow ModelSamplingDiscrete to be instantiated without a model config.
2023-11-01 19:13:03 -04:00
comfyanonymous
ff096742bd
Not used anymore.
2023-11-01 00:01:30 -04:00
comfyanonymous
be85c3408b
Fix some issues with sampling precision.
2023-10-31 23:49:29 -04:00
comfyanonymous
169b76657b
Clean up percent start/end and make controlnets work with sigmas.
2023-10-31 22:14:32 -04:00
comfyanonymous
11e03afad6
Remove a bunch of useless code.
...
DDIM is the same as Euler apart from a small difference in the inpaint
code: DDIM uses randn_like but I set a fixed seed instead.
I'm keeping it because I'm sure people will complain if I remove it.
2023-10-31 18:11:29 -04:00
comfyanonymous
ab0cfba6d1
Sampling code changes.
...
apply_model in model_base now returns the denoised output.
This means that sampling_function now computes things on the denoised
output instead of the model output. This should make things more consistent
across current and future models.
2023-10-31 17:33:43 -04:00
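For an eps-prediction parameterization, the relation between the raw model output and the denoised output that apply_model now returns can be sketched as follows (a simplification that ignores model-specific scalings):

```python
def eps_to_denoised(x, sigma, eps):
    # For a plain eps-prediction model, the denoised (x0) estimate is
    # x - sigma * eps; sampling_function now operates on this quantity.
    return x - sigma * eps
```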
comfyanonymous
83a79be597
Fix some memory issues in sub quad attention.
2023-10-30 15:30:49 -04:00
comfyanonymous
8eae4c0adb
Fix some OOM issues with split attention.
2023-10-30 13:14:11 -04:00
comfyanonymous
d5caedbba2
Add --max-upload-size argument, the default is 100MB.
2023-10-29 03:55:46 -04:00
comfyanonymous
9533904e39
Fix checkpoint loader with config.
2023-10-27 22:13:55 -04:00
comfyanonymous
3ad424ff47
SD1 and SD2 clip and tokenizer code is now more similar to the SDXL one.
2023-10-27 15:54:04 -04:00
comfyanonymous
d4bc91d58f
Support SSD1B model and make it easier to support asymmetric unets.
2023-10-27 14:45:15 -04:00
comfyanonymous
817a182bac
Restrict loading embeddings from embedding folders.
2023-10-27 02:54:13 -04:00
comfyanonymous
fbdca5341d
Faster clip image processing.
2023-10-26 01:53:01 -04:00
comfyanonymous
f083f6b663
Fix some OOM issues with split and sub quad attention.
2023-10-25 20:17:28 -04:00
comfyanonymous
62f16ae274
Fix uni_pc returning noisy image when steps <= 3
2023-10-25 16:08:30 -04:00
Jedrzej Kosinski
95f137a819
change 'c_adm' to 'y' in ControlNet.get_control
2023-10-25 08:24:32 -05:00
comfyanonymous
62a4b04e7f
Pass extra conds directly to unet.
2023-10-25 00:07:53 -04:00
comfyanonymous
141c4ffcba
Refactor to make it easier to add custom conds to models.
2023-10-24 23:31:12 -04:00
comfyanonymous
5ee8c5fafc
Sampling code refactor to make it easier to add more conds.
2023-10-24 03:38:41 -04:00
comfyanonymous
7f861d49fd
Empty the cache when torch cache is more than 25% free mem.
2023-10-22 13:58:12 -04:00
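The heuristic can be sketched as below; only the 25% ratio comes from the commit, while the function name and the sources of the byte counts (e.g. torch.cuda.memory_reserved()) are assumptions:

```python
def should_empty_cache(reserved_bytes, allocated_bytes, free_mem_bytes):
    # Empty the torch cache when the memory torch reserves but is not
    # currently using exceeds 25% of free memory (ratio per the commit;
    # names and byte sources assumed).
    cached_unused = reserved_bytes - allocated_bytes
    return cached_unused > 0.25 * free_mem_bytes
```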
comfyanonymous
c73b5fab20
attention_basic now works with hypertile.
2023-10-22 03:59:53 -04:00
comfyanonymous
57381b0892
Make sub_quad and split work with hypertile.
2023-10-22 03:51:29 -04:00
comfyanonymous
4acad03054
Fix t2i adapter issue.
2023-10-21 20:31:24 -04:00
comfyanonymous
5a9a1a50af
Make xformers work with hypertile.
2023-10-21 13:23:03 -04:00
comfyanonymous
370c837794
Fix uni_pc sampler math. This changes the images this sampler produces.
2023-10-20 04:16:53 -04:00
comfyanonymous
f18406d838
Make sure cond_concat is on the right device.
2023-10-19 01:14:25 -04:00
comfyanonymous
516c334a26
Refactor cond_concat into conditioning.
2023-10-18 20:36:58 -04:00
comfyanonymous
23abd3ec84
Fix some potential issues.
2023-10-18 19:48:36 -04:00
comfyanonymous
5cf44c22ad
Refactor cond_concat into model object.
2023-10-18 16:48:37 -04:00
comfyanonymous
1b4650e307
Fix memory issue related to control loras.
...
The cleanup function was not getting called.
2023-10-18 02:43:01 -04:00
comfyanonymous
459787f78f
Make VAE code closer to sgm.
2023-10-17 15:18:51 -04:00
comfyanonymous
28b98a96d3
Refactor the attention stuff in the VAE.
2023-10-17 03:19:29 -04:00
comfyanonymous
daabf7fd3a
Add some Quadro cards to the list of cards with broken fp16.
2023-10-16 16:48:46 -04:00
comfyanonymous
bffd427388
Add a separate optimized_attention_masked function.
2023-10-16 02:31:24 -04:00
comfyanonymous
db653f4908
Add a --bf16-unet to test running the unet in bf16.
2023-10-13 14:51:10 -04:00
comfyanonymous
728139a5b9
Refactor code so model can be a dtype other than fp32 or fp16.
2023-10-13 14:41:17 -04:00
comfyanonymous
494ddf7717
pytorch_attention_enabled can now return True when xformers is enabled.
2023-10-11 21:30:57 -04:00
comfyanonymous
18e4504de7
Pull some small changes from the other repo.
2023-10-11 20:38:48 -04:00
comfyanonymous
95df4b6174
Allow attn_mask in attention_pytorch.
2023-10-11 20:38:48 -04:00
comfyanonymous
c60864b5e4
Refactor the attention functions.
...
There's no reason for the whole CrossAttention object to be repeated when
only the operation in the middle changes.
2023-10-11 20:38:48 -04:00
comfyanonymous
e095eadda5
Let unet wrapper functions have .to attributes.
2023-10-11 01:34:38 -04:00
comfyanonymous
9c8d53db6c
Cleanup.
2023-10-10 21:46:53 -04:00
comfyanonymous
fc126a3d33
Merge branch 'taesd_safetensors' of https://github.com/mochiya98/ComfyUI
2023-10-10 21:42:35 -04:00
Yukimasa Funaoka
74dc9ecc66
Supports TAESD models in safetensors format
2023-10-10 13:21:44 +09:00
comfyanonymous
58c5545dcf
Merge branch 'input-directory' of https://github.com/jn-jairo/ComfyUI
2023-10-09 01:53:29 -04:00
comfyanonymous
4456b4d137
load_checkpoint_guess_config can now optionally output the model.
2023-10-06 13:48:18 -04:00
Jairo Correa
26a1afbcfe
Add an option to set the input directory
2023-10-04 19:45:15 -03:00
City
ee2d8e46e1
Fix quality loss due to low precision
2023-10-04 15:40:59 +02:00
badayvedat
d4aa26b684
fix: typo in extra sampler
2023-09-29 06:09:59 +03:00
comfyanonymous
eeadcff352
Add SamplerDPMPP_2M_SDE node.
2023-09-28 21:56:23 -04:00
comfyanonymous
c59ff7dc9b
Print missing VAE keys.
2023-09-28 00:54:57 -04:00
comfyanonymous
97bd301d8f
Add missing samplers to KSamplerSelect.
2023-09-28 00:17:03 -04:00
comfyanonymous
75a26ed5ee
Add a SamplerCustom Node.
...
This node takes a list of sigmas and a sampler object as input.
This lets people easily implement custom schedulers and samplers as nodes.
More nodes will be added to it in the future.
2023-09-27 22:21:18 -04:00
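As an illustration of the kind of custom scheduler this enables, here is a Karras-style sigma schedule in plain Python. The node itself takes a sigmas tensor plus a sampler object per the commit; the helper below is illustrative, not the node's API, and the default values are assumptions:

```python
def karras_like_sigmas(n, sigma_min=0.03, sigma_max=14.6, rho=7.0):
    # Karras-style schedule: interpolate linearly in sigma**(1/rho) space,
    # then raise back to the rho power (defaults here are illustrative).
    ramp = [i / (n - 1) for i in range(n)]
    min_inv = sigma_min ** (1 / rho)
    max_inv = sigma_max ** (1 / rho)
    sigmas = [(max_inv + r * (min_inv - max_inv)) ** rho for r in ramp]
    return sigmas + [0.0]  # schedules conventionally end at zero
```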
comfyanonymous
de1158883e
Refactor sampling related code.
2023-09-27 16:45:22 -04:00
comfyanonymous
23ed2b1654
Model patches can now know which batch is positive and negative.
2023-09-27 12:04:07 -04:00
comfyanonymous
67a9216de5
Scheduler code refactor.
2023-09-26 17:07:07 -04:00
comfyanonymous
cbeebba747
Sampling code refactor.
2023-09-26 13:45:15 -04:00
comfyanonymous
b1650c89ce
Support more controlnet models.
2023-09-23 18:47:46 -04:00
comfyanonymous
68569d26df
Merge branch 'cast_intel' of https://github.com/simonlui/ComfyUI
2023-09-23 00:57:17 -04:00
Simon Lui
47164eb065
Allow Intel GPUs to do the LoRA cast on GPU since they support BF16 natively.
2023-09-22 21:11:27 -07:00
comfyanonymous
a990441ba0
Add a way to set output block patches to modify the h and hsp.
2023-09-22 20:26:47 -04:00
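A minimal sketch of such a patch, assuming the (h, hsp, transformer_options) calling convention implied by the commit; the scaling is purely illustrative:

```python
def output_block_patch(h, hsp, transformer_options):
    # h is the output-block activation and hsp the skip connection; a patch
    # can modify either before they are combined (illustrative scaling only).
    return h * 0.5, hsp
```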
comfyanonymous
02f4208e1f
Allow having a different pooled output for each image in a batch.
2023-09-21 01:14:42 -04:00
comfyanonymous
795f5b3163
Only do the cast on the device if the device supports it.
2023-09-20 17:52:41 -04:00
comfyanonymous
e67a083a9f
Don't depend on torchvision.
2023-09-19 13:12:47 -04:00
MoonRide303
20d8e318c5
Added support for lanczos scaling
2023-09-19 10:40:38 +02:00
comfyanonymous
a6511d69b0
Do lora cast on GPU instead of CPU for higher performance.
2023-09-18 23:04:49 -04:00
comfyanonymous
cdbbeb584d
Enable pytorch attention by default on xpu.
2023-09-17 04:09:19 -04:00
comfyanonymous
5d9b731cf9
Support models without previews.
2023-09-16 12:59:54 -04:00
comfyanonymous
6d227d6fe0
Add cond_or_uncond array to transformer_options.
...
This lets hooks check what is cond and what is uncond.
2023-09-15 22:21:14 -04:00
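A hook can read the new array roughly like this; the 0 = cond / 1 = uncond mapping is an assumption based on ComfyUI's sampler conventions, not stated in the commit:

```python
def my_patch(h, transformer_options):
    # cond_or_uncond lists, per batch chunk, whether it is cond (0) or
    # uncond (1); the mapping is assumed here.
    cond_or_uncond = transformer_options.get("cond_or_uncond", [])
    if cond_or_uncond == [1]:
        return h * 0.0  # e.g. act only on a pure-uncond batch (illustrative)
    return h
```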
comfyanonymous
509dd5ca26
Add DDPM sampler.
2023-09-15 19:22:47 -04:00
comfyanonymous
f45f65b6a4
This isn't used anywhere.
2023-09-15 12:03:03 -04:00
comfyanonymous
11df5713a0
Support for text encoder models that need attention_mask.
2023-09-15 02:02:05 -04:00
comfyanonymous
7211e1dc26
Setting the last layer on SD2.x models now uses the proper indexes.
...
Previously I had made the last layer the penultimate layer because some
checkpoints don't have the final one, but that was inconsistent with the
other models.
TLDR: for SD2.x models only: CLIPSetLastLayer -1 is now -2.
2023-09-14 20:28:22 -04:00
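The indexing convention now shared across models can be sketched with plain lists standing in for hidden states; -1 selects the final hidden state, -2 the penultimate:

```python
def set_last_layer(hidden_states, index):
    # With consistent indexing, -1 is the final hidden state and -2 the
    # penultimate one; per the commit, SD2.x's old -1 now corresponds to -2.
    return hidden_states[index]
```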
comfyanonymous
cac135d12f
Don't run text encoders on xpu because there are issues.
2023-09-14 12:16:07 -04:00
comfyanonymous
a3e0950ffb
Only parse command line args when main.py is called.
2023-09-13 11:38:20 -04:00
comfyanonymous
c8905151e3
Don't leave very large hidden states in the clip vision output.
2023-09-12 15:09:10 -04:00
comfyanonymous
36cc11edbd
Fix issue where autocast fp32 CLIP gave different results from regular.
2023-09-11 21:49:56 -04:00
comfyanonymous
ccc2f830d7
Add ldm format support to UNETLoader.
2023-09-11 16:36:50 -04:00
comfyanonymous
4eef469698
Add a penultimate_hidden_states to the clip vision output.
2023-09-08 14:06:58 -04:00
comfyanonymous
c4c69f8cc3
Support diffusers format t2i adapters.
2023-09-08 11:36:51 -04:00
comfyanonymous
c40f363254
Allow cancelling of everything with a progress bar.
2023-09-07 23:37:03 -04:00
comfyanonymous
b58288668f
Add a ConditioningSetAreaPercentage node.
2023-09-06 03:28:27 -04:00
comfyanonymous
ef0c0892f6
Add a force argument to soft_empty_cache to force a cache empty.
2023-09-04 00:58:18 -04:00
comfyanonymous
2f03d19888
Merge branch 'generalize_fixes' of https://github.com/simonlui/ComfyUI
2023-09-04 00:43:11 -04:00
Simon Lui
f63ffd1bb1
Revert changes in comfy/ldm/modules/diffusionmodules/util.py, which is unused.
2023-09-02 20:07:52 -07:00
comfyanonymous
809db6d52f
Move some functions to utils.py
2023-09-02 22:33:37 -04:00
Simon Lui
1148c2dec7
Some fixes to generalize CUDA specific functionality to Intel or other GPUs.
2023-09-02 18:22:10 -07:00
comfyanonymous
9f5ce6652b
Use a common function to reshape the batch.
2023-09-02 03:42:49 -04:00
comfyanonymous
e68beb56e4
Support SDXL inpaint models.
2023-09-01 15:22:52 -04:00
comfyanonymous
b8f3570a1b
Remove xformers related print.
2023-09-01 02:12:03 -04:00
comfyanonymous
39ca2da00c
Fix controlnet bug.
2023-09-01 02:01:08 -04:00
comfyanonymous
a0578c5470
Fix controlnet issue.
2023-08-31 15:16:58 -04:00