comfyanonymous
08f92d55e9
Partial model shift support.
2024-08-08 14:45:06 -04:00
comfyanonymous
8115d8cce9
Add Flux fp16 support hack.
2024-08-07 15:08:39 -04:00
comfyanonymous
6969fc9ba4
Make supported_dtypes a priority list.
2024-08-07 15:00:06 -04:00
comfyanonymous
cb7c4b4be3
Workaround for lora OOM on lowvram mode.
2024-08-07 14:30:54 -04:00
comfyanonymous
1208863eca
Fix "Comfy" lora keys.
...
They are in this format now:
diffusion_model.full.model.key.name.lora_up.weight
2024-08-07 13:49:31 -04:00
comfyanonymous
e1c528196e
Fix bundled embed.
2024-08-07 13:30:45 -04:00
comfyanonymous
17030fd4c0
Support for "Comfy" lora format.
...
The keys are just: model.full.model.key.name.lora_up.weight
It is supported by all models that ComfyUI supports.
Now people can just convert loras to this format instead of having to ask
me to implement them.
2024-08-07 13:18:32 -04:00
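The "Comfy" lora format above just prefixes the full model key with `model.` and uses `.lora_up.weight` / `.lora_down.weight` suffixes. A minimal sketch of a converter, assuming a caller-supplied mapping from source key stems to full model key names (the function and parameter names here are hypothetical, not from the commit):

```python
def to_comfy_lora_format(lora_sd: dict, model_key_map: dict) -> dict:
    """Convert a lora state dict to the generic "Comfy" key layout:
    model.full.model.key.name.lora_up.weight (and .lora_down.weight).

    model_key_map maps source lora key stems to full model key names.
    """
    out = {}
    for stem, model_key in model_key_map.items():
        up = lora_sd.get(f"{stem}.lora_up.weight")
        down = lora_sd.get(f"{stem}.lora_down.weight")
        if up is None or down is None:
            continue  # skip stems that are not a complete up/down pair
        out[f"model.{model_key}.lora_up.weight"] = up
        out[f"model.{model_key}.lora_down.weight"] = down
    return out
```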
comfyanonymous
c19dcd362f
Controlnet code refactor.
2024-08-07 12:59:28 -04:00
comfyanonymous
1c08bf35b4
Support format for embeddings bundled in loras.
2024-08-07 03:45:25 -04:00
doctorpangloss
963ede9867
Fix catastrophic indentation bug
2024-08-06 23:14:24 -07:00
doctorpangloss
7074f3191d
Fix some relative path issues
2024-08-06 21:57:57 -07:00
comfyanonymous
b334605a66
Fix OOMs happening in some cases.
...
A cloned model patcher sometimes reported a model was loaded on a device
when it wasn't.
2024-08-06 13:36:04 -04:00
comfyanonymous
c14ac98fed
Unload models and load them back in lowvram mode when there is no free vram.
2024-08-06 03:22:39 -04:00
comfyanonymous
2d75df45e6
Flux tweak memory usage.
2024-08-05 21:58:28 -04:00
doctorpangloss
8ab6b4b697
Fix running on CPU again
2024-08-05 17:20:28 -07:00
doctorpangloss
b00964ace4
Further modifications to paths
2024-08-05 17:06:18 -07:00
doctorpangloss
39c6335331
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-08-05 16:13:20 -07:00
doctorpangloss
2bc95c1711
Test improvements and fixes
...
- move workflows to distinct json files
- add the comfy-org workflows for testing
- fix issues where workflows from windows users would not be compatible
with backends running on linux or macos due to path separator
differences. Because this codebase uses get_or_download wherever
checkpoints, models, etc. are used, this is the only place where the
comparison needs to be handled gracefully for downloading. Validation
code will correctly convert backslashes to forward slashes, assuming
that everywhere they are used and compared against a list, they are
intended to be paths and not strict symbols
2024-08-05 15:55:46 -07:00
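The separator fix described above amounts to treating every backslash in a workflow's model path as a path separator. A minimal sketch of that normalization (the function name is an assumption for illustration):

```python
def normalize_checkpoint_path(name: str) -> str:
    # Workflows authored on Windows may contain backslash-separated
    # model paths; backends on linux or macos expect forward slashes.
    # Treat every backslash as a path separator before comparing the
    # name against the known model list.
    return name.replace("\\", "/")
```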
comfyanonymous
8edbcf5209
Improve performance on some lowend GPUs.
2024-08-05 16:24:04 -04:00
a-One-Fan
a178e25912
Fix Flux FP64 math on XPU ( #4210 )
2024-08-05 01:26:20 -04:00
comfyanonymous
78e133d041
Support simple diffusers Flux loras.
2024-08-04 22:05:48 -04:00
Silver
7afa985fba
Correct spelling 'token_weight_pars_t5' to 'token_weight_pairs_t5' ( #4200 )
2024-08-04 17:10:02 -04:00
comfyanonymous
3b71f84b50
ONNX tracing fixes.
2024-08-04 15:45:43 -04:00
comfyanonymous
0a6b008117
Fix issue with some custom nodes.
2024-08-04 10:03:33 -04:00
comfyanonymous
f7a5107784
Fix crash.
2024-08-03 16:55:38 -04:00
comfyanonymous
91be9c2867
Tweak lowvram memory formula.
2024-08-03 16:44:50 -04:00
comfyanonymous
03c5018c98
Lower lowvram memory to 1/3 of free memory.
2024-08-03 15:14:07 -04:00
comfyanonymous
2ba5cc8b86
Fix some issues.
2024-08-03 15:06:40 -04:00
comfyanonymous
1e68002b87
Cap lowvram to half of free memory.
2024-08-03 14:50:20 -04:00
comfyanonymous
ba9095e5bd
Automatically use fp8 for diffusion model weights if:
...
Checkpoint contains weights in fp8.
There isn't enough memory to load the diffusion model in GPU vram.
2024-08-03 13:45:19 -04:00
comfyanonymous
f123328b82
Load T5 in fp8 if it's in fp8 in the Flux checkpoint.
2024-08-03 12:39:33 -04:00
comfyanonymous
63a7e8edba
More aggressive batch splitting.
2024-08-03 11:53:30 -04:00
comfyanonymous
ea03c9dcd2
Better per model memory usage estimations.
2024-08-02 18:09:24 -04:00
comfyanonymous
3a9ee995cf
Tweak regular SD memory formula.
2024-08-02 17:34:30 -04:00
comfyanonymous
47da42d928
Better Flux vram estimation.
2024-08-02 17:02:35 -04:00
doctorpangloss
c348b37b7c
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-08-02 10:55:02 -07:00
Benjamin Berman
c6186bd97e
Fix pylint error
2024-08-01 21:27:16 -07:00
Alexander Brown
ce9ac2fe05
Fix clip_g/clip_l mixup ( #4168 )
2024-08-01 21:40:56 -04:00
comfyanonymous
e638f2858a
Hack to make all resolutions work on Flux models.
2024-08-01 21:39:18 -04:00
doctorpangloss
d9ba795385
Fixes for tests and completing merge
...
- huggingface cache is now better used on platforms that support
symlinking and the files you are requesting already exist in the
cache
- absolute imports were changed to relative in the correct places
- StringEnumRequestParameter has a special case in validation
- fix model_management whitespace issue
- fix comfy.ops references
2024-08-01 18:28:51 -07:00
doctorpangloss
a44a039661
Fix pylint
2024-08-01 16:28:24 -07:00
doctorpangloss
0a1ae64b0b
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-08-01 16:19:11 -07:00
comfyanonymous
d420bc792a
Tweak the memory usage formulas for Flux and SD.
2024-08-01 17:53:45 -04:00
comfyanonymous
d965474aaa
Make ComfyUI split batches a higher priority than weight offload.
2024-08-01 16:39:59 -04:00
comfyanonymous
1c61361fd2
Fast preview support for Flux.
2024-08-01 16:28:11 -04:00
comfyanonymous
a6decf1e62
Fix bfloat16 potentially not being enabled on mps.
2024-08-01 16:18:44 -04:00
comfyanonymous
48eb1399c0
Try to fix mac issue.
2024-08-01 13:41:27 -04:00
comfyanonymous
d7430a1651
Add a way to load the diffusion model in fp8 with UNETLoader node.
2024-08-01 13:30:51 -04:00
comfyanonymous
f2b80f95d2
Better Mac support on flux model.
2024-08-01 13:10:50 -04:00
comfyanonymous
1aa9cf3292
Make lowvram more aggressive on low memory machines.
2024-08-01 12:11:57 -04:00
comfyanonymous
eb96c3bd82
Fix .sft file loading (they are safetensors files).
2024-08-01 11:32:58 -04:00
comfyanonymous
5f98de7697
Load flux t5 in fp8 if weights are in fp8.
2024-08-01 11:05:56 -04:00
comfyanonymous
8d34211a7a
Fix old python versions no longer working.
2024-08-01 09:57:20 -04:00
comfyanonymous
1589b58d3e
Basic Flux Schnell and Flux Dev model implementation.
2024-08-01 09:49:29 -04:00
comfyanonymous
7ad574bffd
Mac supports bf16; just make sure you are using the latest pytorch.
2024-08-01 09:42:17 -04:00
comfyanonymous
e2382b6adb
Make lowvram less aggressive when there are large amounts of free memory.
2024-08-01 03:58:58 -04:00
doctorpangloss
603b8a59c8
Improve interaction with controlnet_aux
2024-07-30 23:11:04 -07:00
comfyanonymous
c24f897352
Fix to get fp8 working on T5 base.
2024-07-31 02:00:19 -04:00
comfyanonymous
a5991a7aa6
Fix hunyuan dit text encoder weights always being in fp32.
2024-07-31 01:34:57 -04:00
comfyanonymous
2c038ccef0
Lower CLIP memory usage by a bit.
2024-07-31 01:32:35 -04:00
comfyanonymous
b85216a3c0
Lower T5 memory usage by a few hundred MB.
2024-07-31 00:52:34 -04:00
doctorpangloss
01f2ecbfb1
Improve messaging
2024-07-30 17:10:06 -07:00
doctorpangloss
ce5fe01768
Improve performance and memory management of upscale models, improve messaging on models loaded and unloaded from the GPU
2024-07-30 17:05:53 -07:00
comfyanonymous
82cae45d44
Fix potential issue with non clip text embeddings.
2024-07-30 14:41:13 -04:00
doctorpangloss
a94cd0b626
Fix pylint issues
2024-07-30 11:40:03 -07:00
doctorpangloss
4e851e2cfa
Fix torch 2.4.0 fix
2024-07-30 11:17:32 -07:00
doctorpangloss
34522e0914
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-07-30 11:11:45 -07:00
comfyanonymous
25853d0be8
Use common function for casting weights to input.
2024-07-30 10:49:14 -04:00
comfyanonymous
79040635da
Remove unnecessary code.
2024-07-30 05:01:34 -04:00
comfyanonymous
66d35c07ce
Improve artifacts on hydit, auraflow and SD3 on specific resolutions.
...
This breaks seeds for resolutions that are not a multiple of 16 pixels
by using circular padding instead of reflection padding, but it should
lower the amount of artifacts when doing img2img at those resolutions.
2024-07-29 20:48:50 -04:00
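The padding change above is easiest to see in one dimension. A small stand-alone illustration of the two modes on a plain list (this is a toy, not the actual torch padding call): "reflect" mirrors the samples next to the edge, while "circular" wraps around, which keeps the edges continuous but changes the padded values — hence the seed break.

```python
def pad_1d(xs: list, n: int, mode: str) -> list:
    # Pad a 1D sequence by n samples on each side.
    if mode == "reflect":
        # Mirror around the edge sample (edge itself not repeated).
        left, right = xs[1:n + 1][::-1], xs[-n - 1:-1][::-1]
    elif mode == "circular":
        # Wrap around: the sequence is treated as periodic.
        left, right = xs[-n:], xs[:n]
    else:
        raise ValueError(mode)
    return left + xs + right
```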
doctorpangloss
3dc2942119
README now documents REST API usage with examples. Also always enable Save (API)
2024-07-29 08:01:19 -07:00
comfyanonymous
4ba7fa0244
Refactor: Move sd2_clip.py to text_encoders folder.
2024-07-28 01:19:20 -04:00
comfyanonymous
cf4418b806
Don't treat Bert model like CLIP.
...
Bert can accept up to 512 tokens so any prompt with more than 77 should
just be passed to it as is instead of splitting it up like CLIP.
2024-07-26 13:08:12 -04:00
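The difference described above — CLIP splits long prompts into 77-token chunks, while Bert (up to 512 tokens) takes the whole prompt as-is when it fits — can be sketched as a single chunking helper (the function name and shape are illustrative assumptions):

```python
def chunk_tokens(tokens: list, max_len: int) -> list:
    # Return the prompt as one chunk when it fits within the encoder's
    # context length; otherwise split it into max_len-sized chunks the
    # way CLIP-style encoders do.
    if len(tokens) <= max_len:
        return [tokens]
    return [tokens[i:i + max_len] for i in range(0, len(tokens), max_len)]
```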
comfyanonymous
8328a2d8cd
Let hunyuan dit work with all prompt lengths.
2024-07-26 12:11:32 -04:00
comfyanonymous
afe732bef9
Hunyuan dit can now accept longer prompts.
2024-07-26 11:52:58 -04:00
comfyanonymous
a9ac56fc0d
Own BertModel implementation that works with lowvram.
2024-07-26 04:47:17 -04:00
comfyanonymous
25b51b1a8b
Hunyuan DiT lora support.
2024-07-25 22:42:54 -04:00
comfyanonymous
a5f4292f9f
Basic hunyuan dit implementation. ( #4102 )
...
* Let tokenizers return weights to be stored in the saved checkpoint.
* Basic hunyuan dit implementation.
* Fix some resolutions not working.
* Support hydit checkpoint save.
* Init with right dtype.
* Switch to optimized attention in pooler.
* Fix black images on hunyuan dit.
2024-07-25 18:21:08 -04:00
comfyanonymous
f87810cd3e
Let tokenizers return weights to be stored in the saved checkpoint.
2024-07-25 10:52:09 -04:00
comfyanonymous
10c919f4c7
Make it possible to load tokenizer data from checkpoints.
2024-07-24 16:43:53 -04:00
comfyanonymous
10b43ceea5
Remove duplicate code.
2024-07-24 01:12:59 -04:00
doctorpangloss
5c5e101ba3
Fix uv support and better protobuf spec
2024-07-23 17:03:25 -07:00
comfyanonymous
0a4c49c57c
Support MT5.
2024-07-23 15:35:28 -04:00
comfyanonymous
88ed893034
Allow SPieceTokenizer to load model from a byte string.
2024-07-23 14:17:42 -04:00
comfyanonymous
334ba48cea
More generic unet prefix detection code.
2024-07-23 14:13:32 -04:00
comfyanonymous
14764aa2e2
Rename LLAMATokenizer to SPieceTokenizer.
2024-07-22 12:21:45 -04:00
comfyanonymous
b2c995f623
"auto" type is only relevant to the SetUnionControlNetType node.
2024-07-22 11:30:38 -04:00
Chenlei Hu
4151fbfa8a
Add error message on union controlnet ( #4081 )
2024-07-22 11:27:32 -04:00
Benjamin Berman
a7a84fe747
Merge branch 'add-group-templating' of https://github.com/hku/ComfyUI into pr-2788
2024-07-21 14:37:12 -07:00
comfyanonymous
95fa9545f1
Only append zero to noise schedule if last sigma isn't zero.
2024-07-20 12:37:30 -04:00
Benjamin Berman
cf9fdc5feb
Traversable differences between python 3.10 and 3.11
2024-07-19 22:20:39 -07:00
doctorpangloss
87ab9d42d0
Merge branch 'execution_model_inversion' of github.com:guill/ComfyUI into pr-execution
2024-07-19 17:49:41 -07:00
comfyanonymous
6ab8cad22e
Implement beta sampling scheduler.
...
It is based on: https://arxiv.org/abs/2407.12173
Add "beta" to the list of schedulers and the BetaSamplingScheduler node.
2024-07-19 18:05:09 -04:00
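The beta scheduler idea is to pick timesteps by warping a uniform grid through an inverse-CDF-style function and then reading sigmas at those timesteps. The sketch below uses a caller-supplied `warp` as a stand-in; per the commit's reference, the real scheduler uses the beta distribution's inverse CDF (e.g. `scipy.stats.beta.ppf`) as that warp. Everything here is a hedged illustration, not the actual implementation.

```python
def beta_like_schedule(sigmas: list, steps: int, warp) -> list:
    # sigmas: the model's full sigma table, indexed by timestep.
    # warp: a monotone map [0, 1] -> [0, 1] (inverse-CDF stand-in).
    total = len(sigmas) - 1
    ts = [1 - i / steps for i in range(steps)]  # 1.0 down to 1/steps
    picked = [sigmas[round(warp(t) * total)] for t in ts]
    return picked + [0.0]  # append the final zero sigma
```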
doctorpangloss
499545c373
Improve requirements.txt for faster installation, improve validation error reporting
2024-07-19 09:16:18 -07:00
doctorpangloss
0c34c2b99d
Fix #13 audio nodes now work and test correctly
2024-07-18 17:15:44 -07:00
doctorpangloss
cc99d89ac6
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-07-18 16:31:21 -07:00
doctorpangloss
baf19e8563
Fix downloading URL files
2024-07-18 13:26:20 -07:00
doctorpangloss
5459cfa832
Improve model downloading, add FolderPaths object for custom nodes
2024-07-17 17:20:47 -07:00
喵哩个咪
855789403b
support clip-vit-large-patch14-336 ( #4042 )
...
* support clip-vit-large-patch14-336
* support clip-vit-large-patch14-336
2024-07-17 13:12:50 -04:00
comfyanonymous
6f7869f365
Get clip vision image size from config.
2024-07-17 13:05:38 -04:00
comfyanonymous
281ad42df4
Fix lowvram union controlnet bug.
2024-07-17 10:16:31 -04:00
doctorpangloss
d98c2c5456
Fix missing __init__.py
2024-07-16 15:30:59 -07:00
Thomas Ward
c5a48b15bd
Make default hash lib configurable without code changes via CLI argument ( #3947 )
...
* cli_args: Add --duplicate-check-hash-function.
* server.py: compare_image_hash configurable hash function
Uses an argument added in cli_args to specify the type of hashing to default to for duplicate hash checking. Uses an `eval()` to identify the specific hashlib class to utilize, but ultimately safely operates because we have specific options and only those options/choices in the arg parser. So we don't have any unsafe input there.
* Add hasher() to node_helpers
* hashlib selection moved to node_helpers
* default-hashing-function instead of dupe checking hasher
This makes a default-hashing-function option instead of previous selected option.
* Use args.default_hashing_function
* Use safer handling for node_helpers.hasher()
Uses a safer handling method than `eval` to evaluate default hashing function.
* Stray parentheses are evil.
* Indentation fix.
Somehow when I hit save I didn't notice I missed a space to make indentation work proper. Oops!
2024-07-16 18:27:09 -04:00
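The "safer handling than `eval`" step in this PR can be sketched as a whitelist lookup on the `hashlib` module: the argument parser already restricts choices, and `getattr` resolves the constructor without evaluating user input. The exact whitelist and function name below are assumptions for illustration.

```python
import hashlib

def hasher(name: str):
    # Resolve a hash constructor by name from an explicit whitelist,
    # instead of eval()-ing the user-supplied string.
    allowed = {"md5", "sha1", "sha256", "sha512"}
    if name not in allowed:
        raise ValueError(f"unsupported hash function: {name}")
    return getattr(hashlib, name)
```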
comfyanonymous
8270c62530
Add SetUnionControlNetType to set the type of the union controlnet model.
2024-07-16 17:04:53 -04:00
doctorpangloss
72baecad87
Improve logging and tracing for validation errors
2024-07-16 12:26:30 -07:00
comfyanonymous
821f93872e
Allow model sampling to set number of timesteps.
2024-07-16 15:18:40 -04:00
Chenlei Hu
99458e8aca
Add FrontendManager to manage non-default front-end impl ( #3897 )
...
* Add frontend manager
* Add tests
* nit
* Add unit test to github CI
* Fix path
* nit
* ignore
* Add logging
* Install test deps
* Remove 'stable' keyword support
* Update test
* Add web-root arg
* Rename web-root to front-end-root
* Add test on non-exist version number
* Use repo owner/name to replace hard coded provider list
* Inline cmd args
* nit
* Fix unit test
2024-07-16 11:26:11 -04:00
doctorpangloss
a20bf8134d
Fix AuraFlow
2024-07-15 15:29:49 -07:00
comfyanonymous
1305fb294c
Refactor: Move some code to the comfy/text_encoders folder.
2024-07-15 17:36:24 -04:00
doctorpangloss
3d1d833e6f
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-07-15 14:22:49 -07:00
doctorpangloss
15d21b66d7
Fix stray folder_paths file; improve node importing errors
2024-07-15 10:46:05 -07:00
comfyanonymous
7914c47d5a
Quick fix for the promax controlnet.
2024-07-14 10:07:36 -04:00
comfyanonymous
a3dffc447a
Support AuraFlow Lora and loading model weights in diffusers format.
...
You can load model weights in diffusers format using the UNETLoader node.
2024-07-13 13:51:40 -04:00
comfyanonymous
29c2e26724
Better tokenizing code for AuraFlow.
2024-07-12 01:15:25 -04:00
comfyanonymous
8e012043a9
Add a ModelSamplingAuraFlow node to change the shift value.
...
Set the default AuraFlow shift value to 1.73 (sqrt(3)).
2024-07-11 17:57:36 -04:00
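For context on what the shift value does: flow-matching models commonly remap the noise level with a shift of the form below, where larger shift values push sampling toward higher noise. This formula is an assumption about the mechanism, not quoted from the commit; only the default value 1.73 ≈ sqrt(3) is from the commit body.

```python
def shift_sigma(sigma: float, shift: float = 1.73) -> float:
    # Common flow-model timestep shift (assumed form): identity at the
    # endpoints 0 and 1, biased toward higher noise in between when
    # shift > 1.
    return shift * sigma / (1 + (shift - 1) * sigma)
```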
comfyanonymous
9f291d75b3
AuraFlow model implementation.
2024-07-11 16:52:26 -04:00
comfyanonymous
f45157e3ac
Fix error message never being shown.
2024-07-11 11:46:51 -04:00
comfyanonymous
5e1fced639
Cleaner support for loading different diffusion model types.
2024-07-11 11:37:31 -04:00
comfyanonymous
ffe0bb0a33
Remove useless code.
2024-07-10 20:33:12 -04:00
comfyanonymous
391c1046cf
More flexibility with text encoder return values.
...
Text encoders can now return other values to the CONDITIONING than the cond
and pooled output.
2024-07-10 20:06:50 -04:00
comfyanonymous
e44fa5667f
Support returning text encoder attention masks.
2024-07-10 19:31:22 -04:00
Benjamin Berman
0b45fa8db1
Fix model downloader
2024-07-09 22:52:25 -07:00
Extraltodeus
f1a01c2c7e
Add sampler_pre_cfg_function ( #3979 )
...
* Update samplers.py
* Update model_patcher.py
2024-07-09 16:20:49 -04:00
doctorpangloss
3d67224937
Improve model downloading from Hugging Face Hub
2024-07-09 12:57:33 -07:00
comfyanonymous
ade7aa1b0c
Remove useless import.
2024-07-09 11:05:05 -04:00
comfyanonymous
faa57430b0
Controlnet union model basic implementation.
...
This is only the model code itself, it currently defaults to an empty
embedding [0] * 6 which seems to work better than treating it like a
regular controlnet.
TODO: Add nodes to select the image type.
2024-07-08 23:49:02 -04:00
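The default described above — an empty `[0] * 6` type embedding until a node exists to select the image type — can be sketched as a tiny selector (function name and one-hot interpretation are assumptions; only the `[0] * 6` default is from the commit body):

```python
def union_control_type(num_types=6, selected=None):
    # With no image type selected, the union controlnet receives the
    # all-zero embedding ([0] * 6). A selection node would flip the
    # chosen index on.
    emb = [0] * num_types
    if selected is not None:
        emb[selected] = 1
    return emb
```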
doctorpangloss
da21da1d8c
The first image in the workflow should be outputted, not the last.
2024-07-08 10:21:45 -07:00
doctorpangloss
a1fee05e60
Improve configuration via files, including automatically updating configuration when configuration files change.
2024-07-08 10:01:17 -07:00
comfyanonymous
bb663bcd6c
Rename clip_t5base to t5base for stable audio text encoder.
2024-07-08 08:53:55 -04:00
comfyanonymous
2dc84d1444
Add a way to set the timestep multiplier in the flow sampling.
2024-07-06 04:06:03 -04:00
comfyanonymous
ff63893d10
Support other types of T5 models.
2024-07-06 02:42:53 -04:00
comfyanonymous
4040491149
Better T5xxl detection.
2024-07-06 00:53:33 -04:00
comfyanonymous
b8e58a9394
Cleanup T5 code a bit.
2024-07-06 00:36:49 -04:00
comfyanonymous
80c4590998
Allow specifying the padding token for the tokenizer.
2024-07-06 00:06:49 -04:00
comfyanonymous
ce649d61c0
Allow zeroing out of embeds with unused attention mask.
2024-07-05 23:48:17 -04:00
doctorpangloss
b6b97574dc
Handle CPU torch more gracefully
2024-07-05 10:47:06 -07:00
doctorpangloss
cf2eaedc5b
Fix tests
2024-07-05 10:37:28 -07:00
comfyanonymous
739b76630e
Remove useless code.
2024-07-04 15:14:13 -04:00
doctorpangloss
a13088ccec
Merge upstream
2024-07-04 11:58:55 -07:00
doctorpangloss
95d47276e9
Improve tests and distributed error notifications
...
- Tests now perform faster
- Tests will run on supported GPU platforms
- Configuration has known issues related to setting up a working
directory for an embedded client
- Introduce a Skeletonize node that solves many problems with Canny
- Improve behavior of exception reporting
2024-07-04 10:16:02 -07:00
comfyanonymous
d7484ef30c
Support loading checkpoints with the UNETLoader node.
2024-07-03 11:34:32 -04:00
comfyanonymous
537f35c7bc
Don't update dict if contiguous.
2024-07-02 20:21:51 -04:00
Alex "mcmonkey" Goodwin
3f46362d22
fix non-contiguous tensor saving (from channels-last) ( #3932 )
2024-07-02 20:16:33 -04:00
Chenlei Hu
9dd549e253
Add --no-custom-node cmd flag ( #3903 )
...
* Add --no-custom-node cmd flag
* nit
2024-07-01 17:54:03 -04:00
comfyanonymous
05e831697a
Switch to the real cfg++ method in the samplers.
...
The old _pp ones will be updated automatically to the regular ones with 2x
the cfg.
My fault for not checking what the "_pp" samplers actually did.
2024-06-29 11:59:48 -04:00
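Reading the commit body literally — old "_pp" sampler names get updated automatically to the regular samplers with twice the cfg — the migration can be sketched as follows (a hypothetical helper; the actual rename table in the code may differ):

```python
def migrate_pp_sampler(name: str, cfg: float):
    # Map a legacy "_pp" sampler to its regular counterpart and double
    # the cfg, matching the automatic update described above.
    if name.endswith("_pp"):
        return name[:-3], cfg * 2.0
    return name, cfg
```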
doctorpangloss
dbc2a4ba29
Disable this lint warning
2024-06-28 18:45:15 -07:00
doctorpangloss
2cad5ec0d6
LoRA test
2024-06-28 17:06:29 -07:00
comfyanonymous
264caca20e
ControlNetApplySD3 node can now be used to use SD3 controlnets.
2024-06-27 18:43:11 -04:00
comfyanonymous
f8f7568d03
Basic SD3 controlnet implementation.
...
Still missing the node to properly use it.
2024-06-27 18:43:11 -04:00
comfyanonymous
66aaa14001
Controlnet refactor.
2024-06-27 18:43:11 -04:00
comfyanonymous
8ceb5a02a3
Support saving stable audio checkpoint that can be loaded back.
2024-06-27 11:06:52 -04:00
comfyanonymous
4f9d2b057c
Remove print.
2024-06-27 02:54:15 -04:00
comfyanonymous
44947e7ad4
Add DEIS order 3 sampler.
...
Order 4 seems to give bad results.
2024-06-26 22:40:05 -04:00
comfyanonymous
69d710e40f
Implement my alternative take on CFG++ as the euler_pp sampler.
...
Add euler_ancestral_pp which is the ancestral version of euler with the
same modification.
2024-06-25 07:41:52 -04:00
comfyanonymous
73ca780019
Add SamplerEulerCFG++ node.
...
This node should match the DDIM implementation of CFG++ when "regular" is
selected.
"alternative" is a slightly different take on CFG++
2024-06-23 13:21:18 -04:00
comfyanonymous
2f360ae898
Support OneTrainer SD3 lora format.
2024-06-22 13:08:04 -04:00
comfyanonymous
4ef1479dcd
Multi dimension tiled scale function and tiled VAE audio encoding fallback.
2024-06-22 11:57:49 -04:00
comfyanonymous
1e2839f4d9
More proper tiled audio decoding.
2024-06-20 16:50:31 -04:00
doctorpangloss
a315ec1c0a
Assume JSON config files are text, utf-8 encoded
2024-06-20 10:17:04 -07:00
comfyanonymous
d5efde89b7
Add ipndm_v sampler, works best with the exponential scheduler.
2024-06-20 08:51:49 -04:00
doctorpangloss
faba95d94a
Suppress with # pylint: disable=assignment-from-none because these are legit
2024-06-19 20:47:35 -07:00
doctorpangloss
185ba7e990
Reformat model_base
2024-06-19 20:43:20 -07:00
comfyanonymous
028a583bef
Fix issue with full diffusers SD3 loras.
2024-06-19 22:32:04 -04:00
comfyanonymous
0d6a57938e
Support loading diffusers SD3 model format with UNETLoader node.
2024-06-19 22:21:18 -04:00
comfyanonymous
b08a9dd04b
Remove empty line.
2024-06-19 20:20:35 -04:00
Mario Klingemann
eee815ec99
Update sd1_clip.py ( #3684 )
...
Made token instance check more flexible so it also works with integers from numpy arrays or long tensors
2024-06-19 16:42:41 -04:00
doctorpangloss
facf68e7b9
These should not raise NotImplemented
2024-06-19 13:40:42 -07:00
doctorpangloss
92f435aba3
Fix progress updates
2024-06-19 13:40:28 -07:00
comfyanonymous
e11052afcf
Add ipndm sampler.
2024-06-19 16:32:30 -04:00
doctorpangloss
68f410b8da
Autogenerated code is too sensitive to small changes
2024-06-19 09:42:14 -07:00
comfyanonymous
3914d5a2ae
Support full SD3 loras.
2024-06-19 10:13:33 -04:00
doctorpangloss
6015c4132f
Fix more unreferenced variables
2024-06-18 21:24:10 -07:00
doctorpangloss
2aecff06ff
Fixing missing __init__.py and other errors that appear when using an IDE
2024-06-18 21:16:37 -07:00
doctorpangloss
45d553e970
When using quick_test_for_ci, raise import errors
2024-06-18 19:41:22 -07:00
comfyanonymous
a45df69570
Basic tiled decoding for audio VAE.
2024-06-17 22:48:23 -04:00
doctorpangloss
8cdc246450
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-06-17 16:19:48 -07:00
Janek Mann
b7c473d1ab
Fix lora keys for SimpleTuner ( #3759 )
2024-06-17 07:55:06 -04:00
Jacob Segal
9d624564fa
Merge branch 'master' into execution_model_inversion
2024-06-16 19:08:09 -07:00
Jacob Segal
8d17f3c7bf
Fix ui output for duplicated nodes
2024-06-16 18:39:24 -07:00
doctorpangloss
aab3b6c5a4
Remove unused k_diffusion utils code
2024-06-16 11:27:01 -07:00
comfyanonymous
6425252c4f
Use fp16 as the default vae dtype for the audio VAE.
2024-06-16 13:12:54 -04:00
comfyanonymous
8ddc151a4c
Squash deprecation warning on new pytorch.
2024-06-16 13:06:23 -04:00
comfyanonymous
ca9d300a80
Better estimation for memory usage during audio VAE encoding/decoding.
2024-06-16 11:47:32 -04:00
comfyanonymous
746a0410d4
Fix VAEEncode with taesd3.
2024-06-16 03:10:04 -04:00
comfyanonymous
04e8798c37
Improvements to the TAESD3 implementation.
2024-06-16 02:04:24 -04:00
Dr.Lt.Data
df7db0e027
support TAESD3 ( #3738 )
2024-06-16 02:03:53 -04:00
comfyanonymous
bb1969cab7
Initial support for the stable audio open model.
2024-06-15 12:14:56 -04:00
comfyanonymous
1281f933c1
Small optimization.
2024-06-15 02:44:38 -04:00
comfyanonymous
f2e844e054
Optimize some unneeded if conditions in the sampling code.
2024-06-15 02:26:19 -04:00
comfyanonymous
0ec513d877
Add a --force-channels-last flag to run inference models in channels-last mode.
2024-06-15 01:08:12 -04:00
Max Tretikov
878f10f143
Fix scheme.py errors
2024-06-14 16:47:34 -06:00
comfyanonymous
0e06b370db
Print key names for easier debugging.
2024-06-14 18:18:53 -04:00
Max Tretikov
cbe69364db
Lowercase T for torch.tensor
2024-06-14 15:58:59 -06:00
Max Tretikov
205b240a89
Add an exception if history API can't be reached
2024-06-14 14:47:54 -06:00
Max Tretikov
bd59bae606
Fix compile_core in comfy.ldm.modules.diffusionmodules.mmdit
2024-06-14 14:43:55 -06:00
Max Tretikov
891154b79e
Ensure ema is defined before operating on it
2024-06-14 14:37:50 -06:00
Max Tretikov
c364e42a11
Remove autoencoder legacy method
2024-06-14 14:34:57 -06:00
Max Tretikov
a248c4c141
Change query parameters to recommended usage of TypedDict
2024-06-14 14:33:11 -06:00
Max Tretikov
935f6c2061
Add pylint ignore for importlib.abc.Traversable
2024-06-14 14:20:07 -06:00
Max Tretikov
36326226f7
Change default value for clip_type to exception
2024-06-14 14:14:00 -06:00
Max Tretikov
54e6b82d2c
Add default value for clip_type in comfy.nodes.base_nodes
2024-06-14 14:12:45 -06:00
Max Tretikov
8b091f02de
Add xformer.ops imports
2024-06-14 14:09:46 -06:00
Max Tretikov
ee44e3b1d7
Fix errors in comfy.extra_samplers.uni_pc
2024-06-14 13:40:10 -06:00
Max Tretikov
9ad840e614
Fix import paths in comfy.cmd.main
2024-06-14 13:20:34 -06:00
Max Tretikov
d06c38d5b4
Fix logging string formatting in main_pre.py
2024-06-14 13:19:15 -06:00
Max Tretikov
ee9daa3a5c
Add NotImplementedError for LatentPreviewer.decode_latent_to_preview
2024-06-14 13:18:24 -06:00
Max Tretikov
9cf4f9830f
Coalesce directml_enabled and directml_device into one variable
2024-06-14 13:16:05 -06:00
Max Tretikov
5cd4ca9906
Fix function name dupes in comfy.cmd.server
2024-06-14 12:42:40 -06:00
Max Tretikov
3d1d67a2d7
Remove remaining classproperty decorators in comfy.api.schemas
2024-06-14 12:04:35 -06:00
Max Tretikov
122fe824ec
Recompose classproperty decorator and fix super calls on dynamic type in comfy.api.schemas.schema
2024-06-14 12:01:28 -06:00
Max Tretikov
6c53388619
Fix xformers import statements in comfy.ldm.modules.attention
2024-06-14 11:21:08 -06:00
Max Tretikov
74023da3a0
Fix several errors in comfy.vendor.appdirs
2024-06-14 11:17:03 -06:00
Max Tretikov
b273eb6146
Fix unsubscriptable errors in analytics.py
2024-06-14 11:06:26 -06:00
Max Tretikov
05f4c2a5bc
Fix no-member errors in comfy.ldm.modules.ema
2024-06-14 09:01:38 -06:00
Max Tretikov
f0812a88fe
Fix invalid-unary-operand-type in k_diffusion.sampling
2024-06-14 00:31:09 -06:00
Max Tretikov
a919272e3b
Fix errors in model_management.py
2024-06-14 00:17:47 -06:00
Max Tretikov
14da37cdf0
Fix possibly-used-before-assignment in samplers.py
2024-06-13 23:49:46 -06:00
Max Tretikov
a02c632aa8
Fix possibly-used-before-assignment in lora.py
2024-06-13 23:46:00 -06:00
Max Tretikov
5e89252ab6
Fix cv2 lint errors
2024-06-13 23:40:44 -06:00
Max Tretikov
f9c3779d4e
Ensure unet is defined
2024-06-13 23:25:36 -06:00
Max Tretikov
61b484d546
Fix lack of NotImplemented exceptions in model_base
2024-06-13 23:24:41 -06:00
Simon Lui
5eb98f0092
Exempt IPEX from non_blocking previews fixing segmentation faults. ( #3708 )
2024-06-13 18:51:14 -04:00
comfyanonymous
ac151ac169
Support SD3 diffusers lora.
2024-06-13 18:26:10 -04:00
Max Tretikov
2f12a8a790
Ensure patch_data is defined
2024-06-13 15:32:09 -06:00
Max Tretikov
d7f7d81b8a
Change tokens to tensor of type long
2024-06-13 15:25:15 -06:00
comfyanonymous
37a08a41b3
Support setting weight offsets in weight patcher.
2024-06-13 17:21:26 -04:00
Max Tretikov
63636c3355
Fix no-member of EPS inherited classes
2024-06-13 15:15:30 -06:00
Max Tretikov
c69d4cae0a
Fix all undefined variables
2024-06-13 14:49:00 -06:00
doctorpangloss
8be5134f4c
Fix remaining relative versus absolute namespace errors
2024-06-13 13:37:23 -07:00
doctorpangloss
f6388683e0
Fix issues with base_nodes
2024-06-13 13:16:25 -07:00
doctorpangloss
73a11c0dbb
Fix execution context interaction
2024-06-12 14:34:24 -07:00
doctorpangloss
cac6690481
Add known SD3 model files, merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-06-12 10:56:41 -07:00
comfyanonymous
605e64f6d3
Fix lowvram issue.
2024-06-12 10:39:33 -04:00
comfyanonymous
1ddf512fdc
Don't auto convert clip and vae weights to fp16 when saving checkpoint.
2024-06-12 01:07:58 -04:00
comfyanonymous
694e0b48e0
SD3 better memory usage estimation.
2024-06-12 00:49:00 -04:00
comfyanonymous
69c8d6d8a6
Single and dual clip loader nodes support SD3.
...
You can use the CLIPLoader to use the t5xxl only or the DualCLIPLoader to
use CLIP-L and CLIP-G only for sd3.
2024-06-11 23:27:39 -04:00
comfyanonymous
0e49211a11
Load the SD3 T5xxl model in the same dtype stored in the checkpoint.
2024-06-11 17:03:26 -04:00
comfyanonymous
5889b7ca0a
Support multiple text encoder configurations on SD3.
2024-06-11 13:14:43 -04:00
doctorpangloss
b8e4fd4528
Controlnet++ checkpoints
2024-06-11 08:45:56 -07:00
comfyanonymous
9424522ead
Reuse code.
2024-06-11 07:20:26 -04:00
Dango233
73ce178021
Remove redundancy in mmdit.py ( #3685 )
2024-06-11 06:30:25 -04:00
doctorpangloss
6789e9c71e
Add Reference Only ControlNet hack, add ControlNet++ models
2024-06-10 20:36:23 -07:00
doctorpangloss
e7682ced56
Better support for transformers t5
2024-06-10 20:22:17 -07:00
doctorpangloss
a5d828be77
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-06-10 13:21:36 -07:00
comfyanonymous
a82fae2375
Fix bug with cosxl edit model.
2024-06-10 16:00:03 -04:00
comfyanonymous
8c4a9befa7
SD3 Support.
2024-06-10 14:06:23 -04:00
doctorpangloss
d778277a68
Merge with upstream and fix tests
2024-06-10 10:01:08 -07:00
comfyanonymous
a5e6a632f9
Support sampling non 2D latents.
2024-06-10 01:31:09 -04:00
comfyanonymous
742d5720d1
Support zeroing out text embeddings with the attention mask.
2024-06-09 16:51:58 -04:00
comfyanonymous
6cd8ffc465
Reshape the empty latent image to the right amount of channels if needed.
2024-06-08 02:35:08 -04:00
doctorpangloss
7f300bcb7a
Multi-modal LLM support and ongoing improvements to language features.
2024-06-07 16:23:10 -07:00
comfyanonymous
56333d4850
Use the end token for the text encoder attention mask.
2024-06-07 03:05:23 -04:00
doctorpangloss
6575409461
Additional chat templates to ease the use of many models.
2024-06-06 20:51:05 -07:00
doctorpangloss
ebf2ef27c7
Improve LLM / language support
2024-06-06 14:57:52 -07:00
comfyanonymous
104fcea0c8
Add function to get the list of currently loaded models.
2024-06-05 23:25:16 -04:00
comfyanonymous
b1fd26fe9e
pytorch xpu should be flash or mem efficient attention?
2024-06-04 17:44:14 -04:00
doctorpangloss
3f559135c6
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-06-03 11:42:55 -07:00
comfyanonymous
809cc85a8e
Remove useless code.
2024-06-02 19:23:37 -04:00
comfyanonymous
b249862080
Add an annoying print to a function I want to remove.
2024-06-01 12:47:31 -04:00
doctorpangloss
5151d5b017
Include HyperSD checkpoints
2024-05-31 16:21:10 -07:00
doctorpangloss
cb557c960b
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-05-31 07:42:11 -07:00
doctorpangloss
3125366eda
Improve compatibility with comfyui-extra-models, improve API
2024-05-30 16:50:34 -07:00
comfyanonymous
bf3e334d46
Disable non_blocking when --deterministic or directml.
2024-05-30 11:07:38 -04:00
JettHu
b26da2245f
Fix UnetParams annotation typo ( #3589 )
2024-05-27 19:30:35 -04:00
comfyanonymous
0920e0e5fe
Remove some unused imports.
2024-05-27 19:08:27 -04:00
comfyanonymous
ffc4b7c30e
Fix DORA strength.
...
This is a different version of #3298 with more correct behavior.
2024-05-25 02:50:11 -04:00
comfyanonymous
efa5a711b2
Reduce memory usage when applying DORA: #3557
2024-05-24 23:36:48 -04:00
doctorpangloss
801ef2e3f0
Fix interrupt messaging, add AMD and Intel Dockerfiles
2024-05-23 22:51:44 -07:00
doctorpangloss
a79ccd625f
bf16 selection for AMD
2024-05-22 22:45:15 -07:00
doctorpangloss
35cf996b68
ROCm 6.0 seems to require get_device_name to be called before memory methods in order to return valid data
2024-05-22 22:09:07 -07:00
comfyanonymous
6c23854f54
Fix OSX latent2rgb previews.
2024-05-22 13:56:28 -04:00
Chenlei Hu
7718ada4ed
Add type annotation UnetWrapperFunction ( #3531 )
...
* Add type annotation UnetWrapperFunction
* nit
* Add types.py
2024-05-22 02:07:27 -04:00
comfyanonymous
8508df2569
Work around black image bug on Mac 14.5 by forcing attention upcasting.
2024-05-21 16:56:33 -04:00
doctorpangloss
b241ecc56d
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-05-21 11:38:24 -07:00
comfyanonymous
83d969e397
Disable xformers when tracing model.
2024-05-21 13:55:49 -04:00
doctorpangloss
f69b6225c0
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-05-20 12:06:35 -07:00
comfyanonymous
1900e5119f
Fix potential issue.
2024-05-20 08:19:54 -04:00
comfyanonymous
09e069ae6c
Log the pytorch version.
2024-05-20 06:22:29 -04:00
comfyanonymous
11a2ad5110
Fix controlnet not upcasting on models that have it enabled.
2024-05-19 17:58:03 -04:00
comfyanonymous
0bdc2b15c7
Cleanup.
2024-05-18 10:11:44 -04:00
comfyanonymous
98f828fad9
Remove unnecessary code.
2024-05-18 09:36:44 -04:00
doctorpangloss
519cddcefc
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-05-17 14:04:44 -07:00
doctorpangloss
cb45b86b63
Patch torch device code here
2024-05-17 07:19:15 -07:00
doctorpangloss
4eb66f8a0a
Fix clip clone bug
2024-05-17 07:17:33 -07:00
comfyanonymous
19300655dd
Don't automatically switch to lowvram mode on GPUs with low memory.
2024-05-17 00:31:32 -04:00
doctorpangloss
87d1f30902
Some base nodes now have unit tests
2024-05-16 15:01:51 -07:00
doctorpangloss
3d98440fb7
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-05-16 14:28:49 -07:00
doctorpangloss
5a9055fe05
Tokenizers are now shallow cloned when CLIP is cloned. This allows nodes to add vocab to the tokenizer, as some checkpoints and LoRAs may require.
2024-05-16 12:39:19 -07:00
comfyanonymous
46daf0a9a7
Add debug options to force on and off attention upcasting.
2024-05-16 04:09:41 -04:00
comfyanonymous
2d41642716
Fix lowvram dora issue.
2024-05-15 02:47:40 -04:00
doctorpangloss
8741cb3ce8
LLM support in ComfyUI
...
- Currently uses `transformers`
- Supports model management and correctly loading and unloading models
based on what your machine can support
- Includes a Text Diffusers 2 workflow to demonstrate text rendering in
SD1.5
2024-05-14 17:30:23 -07:00
comfyanonymous
ec6f16adb6
Fix SAG.
2024-05-14 18:02:27 -04:00
comfyanonymous
bb4940d837
Only enable attention upcasting on models that actually need it.
2024-05-14 17:00:50 -04:00
comfyanonymous
b0ab31d06c
Refactor attention upcasting code part 1.
2024-05-14 12:47:31 -04:00
doctorpangloss
78e340e2d8
Traces now include the arguments for executing a node, wherever it makes sense to do so.
2024-05-13 15:48:16 -07:00
doctorpangloss
d11aed87ba
OpenAPI ImageRequestParameter node uses a Chrome user-agent to facilitate external URLs better
2024-05-13 15:03:34 -07:00
Simon Lui
f509c6fe21
Fix Intel GPU memory allocation accuracy and documentation update. ( #3459 )
...
* Change calculation of memory total to be more accurate, allocated is actually smaller than reserved.
* Update README.md install documentation for Intel GPUs.
2024-05-12 06:36:30 -04:00
comfyanonymous
fa6dd7e5bb
Fix lowvram issue with saving checkpoints.
...
The previous fix didn't cover the case where the model was loaded in
lowvram mode right before.
2024-05-12 06:13:45 -04:00
comfyanonymous
49c20cdc70
No longer necessary.
2024-05-12 05:34:43 -04:00
comfyanonymous
e1489ad257
Fix issue with lowvram mode breaking model saving.
2024-05-11 21:55:20 -04:00
doctorpangloss
188eff3376
Omit spurious traces
2024-05-09 17:11:40 -07:00
doctorpangloss
779ff30c17
Provide a protocol for plugins to declare model-management-manageable models. Docs will be updated to specify that plugin authors should use ModelPatcher generally.
2024-05-09 16:07:18 -07:00
doctorpangloss
c2fa74f625
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-05-09 13:45:38 -07:00
doctorpangloss
881258acb6
Progress bar hooks, via the server, are now set via a context. This will be used in other places too.
2024-05-09 13:24:06 -07:00
comfyanonymous
93e876a3be
Remove warnings that confuse people.
2024-05-09 05:29:42 -04:00
doctorpangloss
464c132c50
Add basic ImageRequestParameter node
2024-05-08 16:37:26 -07:00
doctorpangloss
0d8924442a
Improve API return values and tracing reports
2024-05-08 15:52:17 -07:00
comfyanonymous
cd07340d96
Typo fix.
2024-05-08 18:36:56 -04:00
doctorpangloss
aa0cfb54ce
Refine configuration of OpenTelemetry
2024-05-07 17:04:31 -07:00
doctorpangloss
3a64e04a93
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-05-07 13:57:53 -07:00
doctorpangloss
f8fcfa6f08
Improve tracing to propagate to backend workers correctly when using the API. Fix distributed tests.
2024-05-07 13:44:34 -07:00
comfyanonymous
c61eadf69a
Make the load checkpoint with config function call the regular one.
...
I was going to completely remove this function because it is unmaintainable
but I think this is the best compromise.
The clip skip and v_prediction parts of the configs should still work but
not the fp16 vs fp32.
2024-05-06 20:04:39 -04:00
doctorpangloss
75b63fce91
Remove redundant resume_download argument
2024-05-06 10:31:58 -07:00
doctorpangloss
eb7d466b95
Fix dependency on opentelemetry instrumentor; remove websocket based API example since it isn't appropriate for this fork.
2024-05-03 16:54:33 -07:00
doctorpangloss
fd81790f12
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-05-03 06:42:56 -07:00
doctorpangloss
330ecb10b2
Merge with upstream. Remove TLS flags, because a third party proxy will do this better
2024-05-02 21:57:20 -07:00
Simon Lui
a56d02efc7
Change torch.xpu to ipex.optimize, xpu device initialization and remove workaround for text node issue from older IPEX. ( #3388 )
2024-05-02 03:26:50 -04:00
doctorpangloss
4d060f0555
Improve support for extra models
2024-05-01 16:58:29 -07:00
comfyanonymous
f81a6fade8
Fix some edge cases with samplers and arrays with a single sigma.
2024-05-01 17:05:30 -04:00
comfyanonymous
2aed53c4ac
Workaround xformers bug.
2024-04-30 21:23:40 -04:00
Garrett Sutula
bacce529fb
Add TLS Support ( #3312 )
...
* Add TLS Support
* Add to readme
* Add guidance for windows users on generating certificates
* Add guidance for windows users on generating certificates
* Fix typo
2024-04-30 20:17:02 -04:00
doctorpangloss
b94b90c1cc
Improve model downloader coherence with packages like controlnext-aux
2024-04-30 14:28:44 -07:00
doctorpangloss
0862863bc0
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-04-29 13:37:03 -07:00
doctorpangloss
46a712b4d6
Fix missing extension addition from upstream
2024-04-29 13:32:46 -07:00
Jedrzej Kosinski
7990ae18c1
Fix error when more cond masks passed in than batch size ( #3353 )
2024-04-26 12:51:12 -04:00
doctorpangloss
f965fb2bc0
Merge upstream
2024-04-24 22:41:43 -07:00
comfyanonymous
8dc19e40d1
Don't init a VAE model when there are no VAE weights.
2024-04-24 09:20:31 -04:00
Jacob Segal
b3e547f22b
Merge branch 'master' into execution_model_inversion
2024-04-21 21:31:58 -07:00
Jacob Segal
06f3ce9200
Raise exception for bad get_node calls.
2024-04-21 16:10:01 -07:00
Jacob Segal
ecbef304ed
Remove superfluous function parameter
2024-04-20 23:07:18 -07:00
Jacob Segal
b5e4583583
Remove unused functionality
2024-04-20 23:01:34 -07:00
Jacob Segal
7dbee88485
Add docs on when ExecutionBlocker should be used
2024-04-20 22:54:38 -07:00
Jacob Segal
dd3bafb40b
Display an error for dependency cycles
...
Previously, dependency cycles that were created during node expansion
would cause the application to quit (due to an uncaught exception). Now,
we'll throw a proper error to the UI. We also make an attempt to 'blame'
the most relevant node in the UI.
2024-04-20 22:40:38 -07:00
Jacob Segal
5dc13651b0
Use custom exception types.
2024-04-20 18:12:42 -07:00
Jacob Segal
a0bf532558
Use fstrings instead of '%' formatting syntax
2024-04-20 17:52:23 -07:00
comfyanonymous
c59fe9f254
Support VAE without quant_conv.
2024-04-18 21:05:33 -04:00
doctorpangloss
2643730acc
Tracing
2024-04-17 08:20:07 -07:00
comfyanonymous
719fb2c81d
Add basic PAG node.
2024-04-14 23:49:50 -04:00
comfyanonymous
258dbc06c3
Fix some memory related issues.
2024-04-14 12:08:58 -04:00
comfyanonymous
58812ab8ca
Support SDXS 512 model.
2024-04-12 22:12:35 -04:00
comfyanonymous
831511a1ee
Fix issue with sampling_settings persisting across models.
2024-04-09 23:20:43 -04:00
doctorpangloss
e49c662c7f
Enable previews by default and over distributed channels
2024-04-09 13:15:05 -07:00
doctorpangloss
37cca051b6
Enable real-time progress notifications.
2024-04-08 14:56:16 -07:00
doctorpangloss
dd6f7c4215
Fix retrieving history from distributed instance
2024-04-08 14:39:16 -07:00
doctorpangloss
034ffcea03
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-04-08 10:02:37 -07:00
comfyanonymous
30abc324c2
Support properly saving CosXL checkpoints.
2024-04-08 00:36:22 -04:00
comfyanonymous
0a03009808
Fix issue with controlnet models getting loaded multiple times.
2024-04-06 18:38:39 -04:00
kk-89
38ed2da2dd
Fix typo in lowvram patcher ( #3209 )
2024-04-05 12:02:13 -04:00
comfyanonymous
1088d1850f
Support for CosXL models.
2024-04-05 10:53:41 -04:00
doctorpangloss
3e002b9f72
Fix string joining node, improve model downloading
2024-04-04 23:40:29 -07:00
comfyanonymous
41ed7e85ea
Fix object_patches_backup not being the same object across clones.
2024-04-05 00:22:44 -04:00
comfyanonymous
0f5768e038
Fix missing arguments in cfg_function.
2024-04-04 23:38:57 -04:00
comfyanonymous
1f4fc9ea0c
Fix issue with get_model_object on patched model.
2024-04-04 23:01:02 -04:00
comfyanonymous
1a0486bb96
Fix model needing to be loaded on GPU to generate the sigmas.
2024-04-04 22:08:49 -04:00
comfyanonymous
c6bd456c45
Make zero denoise a NOP.
2024-04-04 11:41:27 -04:00
comfyanonymous
fcfd2bdf8a
Small cleanup.
2024-04-04 11:16:49 -04:00
comfyanonymous
0542088ef8
Refactor sampler code for more advanced sampler nodes part 2.
2024-04-04 01:26:41 -04:00
comfyanonymous
57753c964a
Refactor sampling code for more advanced sampler nodes.
2024-04-03 22:09:51 -04:00
comfyanonymous
6c6a39251f
Fix saving text encoder in fp8.
2024-04-02 11:46:34 -04:00
doctorpangloss
abb952ad77
Tweak headers to accept default when none are specified
2024-04-01 20:34:55 -07:00
comfyanonymous
e6482fbbfc
Refactor calc_cond_uncond_batch into calc_cond_batch.
...
calc_cond_batch can take an arbitrary amount of cond inputs.
Added a calc_cond_uncond_batch wrapper with a warning so custom nodes
won't break.
2024-04-01 18:07:47 -04:00
comfyanonymous
575acb69e4
IP2P model loading support.
...
This is the code to load the model and inference it with only a text
prompt. This commit does not contain the nodes to properly use it with an
image input.
This supports both the original SD1 instructpix2pix model and the
diffusers SDXL one.
2024-03-31 03:10:28 -04:00
Benjamin Berman
5208618681
Merge pull request #4 from cjonesuk/patch-1
...
Allow iteration over folder paths
2024-03-30 14:25:25 -07:00
doctorpangloss
b0ab12bf05
Fix #5: TAESD node was using a bad variable name that shadowed a module in a relative import
2024-03-29 16:28:13 -07:00
doctorpangloss
bd87697fdf
ComfyUI Manager now starts successfully, but needs more mitigations:
...
- /manager/reboot needs to use a different approach to restart the
currently running Python process.
- runpy should be used for install.py invocations
2024-03-29 16:25:29 -07:00
doctorpangloss
8f548d4d19
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-03-29 13:36:57 -07:00
comfyanonymous
94a5a67c32
Cleanup to support different types of inpaint models.
2024-03-29 14:44:13 -04:00
doctorpangloss
1f705ba9d9
Fix history retrieval bug when accessing a distributed frontend
2024-03-29 10:49:51 -07:00
doctorpangloss
c6f4301e88
Fix model downloader invoking symlink when it should not
2024-03-28 15:45:04 -07:00
comfyanonymous
5d8898c056
Fix some performance issues with weight loading and unloading.
...
Lower peak memory usage when changing model.
Fix case where model weights would be unloaded and reloaded.
2024-03-28 18:04:42 -04:00
comfyanonymous
327ca1313d
Support SDXS 0.9
2024-03-27 23:58:58 -04:00
doctorpangloss
b0be335d59
Improved support for ControlNet workflows with depth
...
- ComfyUI can now load EXR files.
- There are new arithmetic nodes for floats and integers.
- EXR nodes can load depth maps and be remapped with
ImageApplyColormap. This allows end users to use ground truth depth
data from video game engines or 3D graphics tools and recolor it to
the format expected by depth ControlNets: grayscale inverse depth
maps and "inferno" colored inverse depth maps.
- Fixed license notes.
- Added an additional known ControlNet model.
- Because CV2 is now used to read OpenEXR files, an environment
variable must be set early on in the application, before CV2 is
imported. This file, main_pre, is now imported early on in more
places.
2024-03-26 22:32:15 -07:00
comfyanonymous
ae77590b4e
dora_scale support for lora file.
2024-03-25 18:09:23 -04:00
comfyanonymous
c6de09b02e
Optimize memory unload strategy for more optimized performance.
2024-03-24 02:36:30 -04:00
Jacob Segal
6b6a93cc5d
Merge branch 'master' into execution_model_inversion
2024-03-23 16:31:14 -07:00
doctorpangloss
d8846fcb39
Improved testing of API nodes
...
- dynamicPrompts now set to False by default; CLIPTextEncoder and
related nodes now have it set to True.
- Fixed return values of API nodes.
2024-03-22 22:04:35 -07:00
doctorpangloss
4cd8f9d2ed
Merge with upstream
2024-03-22 14:35:17 -07:00
doctorpangloss
feae8c679b
Add nodes to support OpenAPI and similar backend workflows
2024-03-22 14:22:50 -07:00
Christopher Jones
f8120bbd72
Allow iteration over folder paths
2024-03-22 17:01:17 +00:00
doctorpangloss
0db040cc47
Improve API support
...
- Removed /api/v1/images because you should use your own CDN style
image host and /view for maximum compatibility
- The /api/v1/prompts POST application/json response will now return
the outputs dictionary
- Caching has been removed
- More tests
- Subdirectory prefixes are now supported
- Fixed an issue where a Linux frontend and Windows backend would have
paths that could not interact with each other correctly
2024-03-21 16:24:22 -07:00
doctorpangloss
d73b116446
Update OpenAPI spec
2024-03-21 15:16:52 -07:00
doctorpangloss
005e370254
Merge upstream
2024-03-21 13:15:36 -07:00
comfyanonymous
0624838237
Add inverse noise scaling function.
2024-03-21 14:49:11 -04:00
doctorpangloss
59cf9e5d93
Improve distributed testing
2024-03-20 20:43:21 -07:00
comfyanonymous
5d875d77fe
Fix regression with lcm not working with batches.
2024-03-20 20:48:54 -04:00
comfyanonymous
4b9005e949
Fix regression with model merging.
2024-03-20 13:56:12 -04:00
comfyanonymous
c18a203a8a
Don't unload model weights for non weight patches.
2024-03-20 02:27:58 -04:00
comfyanonymous
150a3e946f
Make LCM sampler use the model noise scaling function.
2024-03-20 01:35:59 -04:00
doctorpangloss
3f4049c5f4
Fix Python 3.10 compatibility detail
2024-03-19 10:48:20 -07:00
comfyanonymous
40e124c6be
SV3D support.
2024-03-18 16:54:13 -04:00
doctorpangloss
74a9c45395
Fix subpath model downloads
2024-03-18 10:10:34 -07:00
comfyanonymous
cacb022c4a
Make saved SD1 checkpoints match more closely the official one.
2024-03-18 00:26:23 -04:00
comfyanonymous
d7897fff2c
Move cascade scale factor from stage_a to latent_formats.py
2024-03-16 14:49:35 -04:00
comfyanonymous
f2fe635c9f
SamplerDPMAdaptative node to test the different options.
2024-03-15 22:36:10 -04:00
comfyanonymous
448d9263a2
Fix control loras breaking.
2024-03-14 09:30:21 -04:00
doctorpangloss
a892411cf8
Add known controlnet models and add --disable-known-models to prevent it from appearing or downloading
2024-03-13 18:11:16 -07:00
comfyanonymous
db8b59ecff
Lower memory usage for loras in lowvram mode at the cost of perf.
2024-03-13 20:07:27 -04:00
doctorpangloss
341c9f2e90
Improvements to node loading, node API, folder paths and progress
...
- Improve node loading order. It now occurs "as late as possible".
Configuration should be exposed as per the README.
- Added methods to specify custom folders and models used in examples
more robustly for custom nodes.
- Downloading models can now be gracefully interrupted.
- Progress notifications are now sent over the network for distributed
ComfyUI operations.
- Python objects have been moved around to reduce transitive
package import issues.
2024-03-13 16:14:18 -07:00
doctorpangloss
3ccbda36da
Adjust known models
2024-03-12 15:51:57 -07:00
doctorpangloss
e68f8885e3
Improve model downloading
2024-03-12 15:27:08 -07:00
doctorpangloss
93cdef65a4
Merge upstream
2024-03-12 09:49:47 -07:00
comfyanonymous
2a813c3b09
Switch some more prints to logging.
2024-03-11 16:34:58 -04:00
comfyanonymous
0ed72befe1
Change log levels.
...
Logging level now defaults to info. --verbose sets it to debug.
2024-03-11 13:54:56 -04:00
doctorpangloss
00728eb20f
Merge upstream
2024-03-11 09:32:57 -07:00
Benjamin Berman
3c57ef831c
Download known models from HuggingFace
2024-03-11 00:15:06 -07:00
comfyanonymous
65397ce601
Replace prints with logging and add --verbose argument.
2024-03-10 12:14:23 -04:00
doctorpangloss
175a50d7ba
Improve vanilla node importing
2024-03-08 16:29:48 -08:00
doctorpangloss
c0d9bc0129
Merge with upstream
2024-03-08 15:17:20 -08:00
comfyanonymous
5f60ee246e
Support loading the sr cascade controlnet.
2024-03-07 01:22:48 -05:00
comfyanonymous
03e6e81629
Set upscale algorithm to bilinear for stable cascade controlnet.
2024-03-06 02:59:40 -05:00
comfyanonymous
03e83bb5d0
Support stable cascade canny controlnet.
2024-03-06 02:25:42 -05:00
comfyanonymous
10860bcd28
Add compression_ratio to controlnet code.
2024-03-05 15:15:20 -05:00
comfyanonymous
478f71a249
Remove useless check.
2024-03-04 08:51:25 -05:00
comfyanonymous
12c1080ebc
Simplify differential diffusion code.
2024-03-03 15:34:42 -05:00
Shiimizu
727021bdea
Implement Differential Diffusion ( #2876 )
...
* Implement Differential Diffusion
* Cleanup.
* Fix.
* Masks should be applied at full strength.
* Fix colors.
* Register the node.
* Cleaner code.
* Fix issue with getting unipc sampler.
* Adjust thresholds.
* Switch to linear thresholds.
* Only calculate nearest_idx on valid thresholds.
2024-03-03 15:34:13 -05:00
comfyanonymous
1abf8374ec
utils.set_attr can now be used to set any attribute.
...
The old set_attr has been renamed to set_attr_param.
2024-03-02 17:27:23 -05:00
comfyanonymous
dce3555339
Add some tesla pascal GPUs to the fp16 working but slower list.
2024-03-02 17:16:31 -05:00
comfyanonymous
51df846598
Let conditioning specify custom concat conds.
2024-03-02 11:44:06 -05:00
comfyanonymous
9f71e4b62d
Let model patches patch sub objects.
2024-03-02 11:43:27 -05:00
comfyanonymous
00425563c0
Cleanup: Use sampling noise scaling function for inpainting.
2024-03-01 14:24:41 -05:00
comfyanonymous
c62e836167
Move noise scaling to object with sampling math.
2024-03-01 12:54:38 -05:00
doctorpangloss
148d57a772
Add extra_model_paths_config to the valid configuration for the comfyui-worker entry point
2024-03-01 08:14:21 -08:00
doctorpangloss
440f72d36f
support differential diffusion nodes
2024-03-01 00:43:33 -08:00
doctorpangloss
915f2da874
Merge upstream
2024-02-29 20:48:27 -08:00
doctorpangloss
44882eab0c
Improve typing and file path handling
2024-02-29 19:29:38 -08:00
Benjamin Berman
e6623a1359
Fix CLIPLoader node, fix CustomNode typing, improve digest
2024-02-29 15:54:42 -08:00
comfyanonymous
cb7c3a2921
Allow image_only_indicator to be None.
2024-02-29 13:11:30 -05:00
doctorpangloss
bae2068111
Fix issue when custom_nodes directory does not exist
2024-02-28 17:46:23 -08:00
doctorpangloss
9d3eb74796
Fix importing vanilla custom nodes
2024-02-28 14:11:34 -08:00
comfyanonymous
b3e97fc714
Koala 700M and 1B support.
...
Use the UNET Loader node to load the unet file to use them.
2024-02-28 12:10:11 -05:00
comfyanonymous
37a86e4618
Remove duplicate text_projection key from some saved models.
2024-02-28 03:57:41 -05:00
Benjamin Berman
e0e98a8783
Update distributed_prompt_worker.py
2024-02-27 23:15:40 -08:00
doctorpangloss
d3c13f8172
(Fixed) Move new_updated
2024-02-27 17:01:16 -08:00
doctorpangloss
272e3ee357
(broken) Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-02-27 16:58:34 -08:00
comfyanonymous
8daedc5bf2
Auto detect playground v2.5 model.
2024-02-27 18:03:03 -05:00
comfyanonymous
d46583ecec
Playground V2.5 support with ModelSamplingContinuousEDM node.
...
Use ModelSamplingContinuousEDM with edm_playground_v2.5 selected.
2024-02-27 15:12:33 -05:00
comfyanonymous
1e0fcc9a65
Make XL checkpoints save in a more standard format.
2024-02-27 02:07:40 -05:00
comfyanonymous
b416be7d78
Make the text projection saved in the checkpoint the right format.
2024-02-27 01:52:23 -05:00
comfyanonymous
03c47fc0f2
Add a min_length property to tokenizer class.
2024-02-26 21:36:37 -05:00
doctorpangloss
bd5073caf2
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-02-26 08:51:07 -08:00
comfyanonymous
8ac69f62e5
Make return_projected_pooled settable from the __init__
2024-02-25 14:49:13 -05:00
comfyanonymous
ca7c310a0e
Support loading old CLIP models saved with CLIPSave.
2024-02-25 08:29:12 -05:00
comfyanonymous
c2cb8e889b
Always return unprojected pooled output for gligen.
2024-02-25 07:33:13 -05:00
Jacob Segal
6d09dd70f8
Make custom VALIDATE_INPUTS skip normal validation
...
Additionally, if `VALIDATE_INPUTS` takes an argument named `input_types`,
that variable will be a dictionary of the socket type of all incoming
connections. If that argument exists, normal socket type validation will
not occur. This removes the last hurdle for enabling variant types
entirely from custom nodes, so I've removed that command-line option.
I've added appropriate unit tests for these changes.
2024-02-24 23:17:01 -08:00
comfyanonymous
1cb3f6a83b
Move text projection into the CLIP model code.
...
Fix issue with not loading the SSD1B clip correctly.
2024-02-25 01:41:08 -05:00
comfyanonymous
6533b172c1
Support text encoder text_projection in lora.
2024-02-24 23:50:46 -05:00
comfyanonymous
1e5f0f66be
Support lora keys with lora_prior_unet_ and lora_prior_te_
2024-02-23 12:21:20 -05:00
logtd
e1cb93c383
Fix model and cond transformer options merge
2024-02-23 01:19:43 -07:00
comfyanonymous
10847dfafe
Cleanup uni_pc inpainting.
...
This causes some small changes to the uni pc inpainting behavior but it
seems to improve results slightly.
2024-02-23 02:39:35 -05:00
doctorpangloss
dd1f7b6183
Adapt to more custom nodes specifications
2024-02-22 20:05:48 -08:00
doctorpangloss
b95fd25380
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-02-22 12:06:41 -08:00
doctorpangloss
fca0d8a050
Plugins can add configuration
2024-02-22 11:58:40 -08:00
doctorpangloss
c941ee09fc
Improve nodes handling
2024-02-21 23:44:16 -08:00
Jacob Segal
5ab1565418
Merge branch 'master' into execution_model_inversion
2024-02-21 19:41:09 -08:00
doctorpangloss
e3ee2418bf
Fix OpenAPI schema
2024-02-21 14:14:59 -08:00
doctorpangloss
8549f4682f
Fix OpenAPI schema validation
2024-02-21 14:13:12 -08:00
doctorpangloss
bf6d91fec0
Merge upstream
2024-02-21 13:33:05 -08:00
doctorpangloss
5cd06a727f
Fix relative imports
2024-02-21 13:31:55 -08:00
doctorpangloss
f419c5760d
Adding __init__.py
2024-02-20 14:24:51 -08:00
comfyanonymous
18c151b3e3
Add some latent2rgb matrices for previews.
2024-02-20 10:57:24 -05:00
comfyanonymous
0d0fbabd1d
Pass pooled CLIP to stage b.
2024-02-20 04:24:45 -05:00
comfyanonymous
c6b7a157ed
Align simple scheduling closer to official stable cascade scheduler.
2024-02-20 04:24:39 -05:00
doctorpangloss
7520691021
Merge with master
2024-02-19 10:55:22 -08:00
comfyanonymous
88f300401c
Enable fp16 by default on mps.
2024-02-19 12:00:48 -05:00
comfyanonymous
e93cdd0ad0
Remove print.
2024-02-19 11:47:26 -05:00
comfyanonymous
3711b31dff
Support Stable Cascade in checkpoint format.
2024-02-19 11:20:48 -05:00
comfyanonymous
d91f45ef28
Some cleanups to how the text encoders are loaded.
2024-02-19 10:46:30 -05:00
comfyanonymous
a7b5eaa7e3
Forgot to commit this.
2024-02-19 04:25:46 -05:00
comfyanonymous
3b2e579926
Support loading the Stable Cascade effnet and previewer as a VAE.
...
The effnet can be used to encode images for img2img with Stage C.
2024-02-19 04:10:01 -05:00
comfyanonymous
dccca1daa5
Fix gligen lowvram mode.
2024-02-18 02:20:23 -05:00
doctorpangloss
ab56eadcc8
Fix missing path arguments
2024-02-17 23:19:21 -08:00
Jacob Segal
508d286b8f
Fix Pyright warnings
2024-02-17 21:56:46 -08:00
comfyanonymous
8b60d33bb7
Add ModelSamplingStableCascade to control the shift sampling parameter.
...
shift is 2.0 by default on Stage C and 1.0 by default on Stage B.
2024-02-18 00:55:23 -05:00
Jacob Segal
9c1e3f7b98
Fix an overly aggressive assertion.
...
This could happen when attempting to evaluate `IS_CHANGED` for a node
during the creation of the cache (in order to create the cache key).
2024-02-17 21:02:59 -08:00
comfyanonymous
6bcf57ff10
Fix attention masks properly for multiple batches.
2024-02-17 16:15:18 -05:00
comfyanonymous
11e3221f1f
fp8 weight support for Stable Cascade.
2024-02-17 15:27:31 -05:00
comfyanonymous
f8706546f3
Fix attention mask batch size in some attention functions.
2024-02-17 15:22:21 -05:00
comfyanonymous
3b9969c1c5
Properly fix attention masks in CLIP with batches.
2024-02-17 12:13:13 -05:00
comfyanonymous
5b40e7a5ed
Implement shift schedule for cascade stage C.
2024-02-17 11:38:47 -05:00
comfyanonymous
929e266f3e
Manual cast for bf16 on older GPUs.
2024-02-17 09:01:17 -05:00
comfyanonymous
6c875d846b
Fix clip attention mask issues on some hardware.
2024-02-17 07:53:52 -05:00
comfyanonymous
805c36ac9c
Make Stable Cascade work on old pytorch 2.0
2024-02-17 00:42:30 -05:00
comfyanonymous
f2d1d16f4f
Support Stable Cascade Stage B lite.
2024-02-16 23:41:23 -05:00
comfyanonymous
0b3c50480c
Make --force-fp32 disable loading models in bf16.
2024-02-16 23:01:54 -05:00
comfyanonymous
97d03ae04a
StableCascade CLIP model support.
2024-02-16 13:29:04 -05:00
comfyanonymous
667c92814e
Stable Cascade Stage B.
2024-02-16 13:02:03 -05:00
doctorpangloss
955fcb8bb0
Fix API
2024-02-16 08:47:11 -08:00
comfyanonymous
f83109f09b
Stable Cascade Stage C.
2024-02-16 10:55:08 -05:00
doctorpangloss
d29028dd3e
Fix distributed prompting
2024-02-16 06:17:06 -08:00
comfyanonymous
5e06baf112
Stable Cascade Stage A.
2024-02-16 06:30:39 -05:00
doctorpangloss
0546b01080
Tweak RPC parameters
2024-02-15 23:43:14 -08:00
doctorpangloss
a62f2c00ed
Fix RPC issue in frontend
2024-02-15 21:54:13 -08:00
doctorpangloss
cd54bf3b3d
Tweak worker RPC creation
2024-02-15 20:28:11 -08:00
doctorpangloss
a5d51c1ae2
Tweak API
2024-02-15 19:08:29 -08:00
comfyanonymous
aeaeca10bd
Small refactor of is_device_* functions.
2024-02-15 21:10:10 -05:00
doctorpangloss
06e74226df
Add external address parameter
2024-02-15 17:39:15 -08:00
doctorpangloss
7c6b8ecb02
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-02-15 12:56:07 -08:00
Jacob Segal
12627ca75a
Add a command-line argument to enable variants
...
This allows the use of nodes that have sockets of type '*' without
applying a patch to the code.
2024-02-14 21:06:53 -08:00
Jacob Segal
e4e20d79b2
Allow input_info to be of type None
2024-02-14 21:04:50 -08:00
comfyanonymous
38b7ac6e26
Don't init the CLIP model when the checkpoint has no CLIP weights.
2024-02-13 00:01:08 -05:00
doctorpangloss
8b4e5cee61
Fix _model_patcher typo
2024-02-12 15:12:49 -08:00
doctorpangloss
b4eda2d5a4
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-02-12 14:24:20 -08:00
comfyanonymous
7dd352cbd7
Merge branch 'feature_expose_discard_penultimate_sigma' of https://github.com/blepping/ComfyUI
2024-02-11 12:23:30 -05:00
Jedrzej Kosinski
f44225fd5f
Fix infinite while loop being possible in ddim_scheduler
2024-02-09 17:11:34 -06:00
doctorpangloss
15ff903b35
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-02-09 12:19:00 -08:00
comfyanonymous
25a4805e51
Add a way to set different conditioning for the controlnet.
2024-02-09 14:13:31 -05:00
doctorpangloss
f195230e2a
More tweaks to cli args
2024-02-09 01:40:27 -08:00
doctorpangloss
a3f9d007d4
Fix CLI args issues
2024-02-09 01:20:57 -08:00
doctorpangloss
bdc843ced1
Tweak this message
2024-02-08 22:56:06 -08:00
doctorpangloss
b3f1ce7ef0
Update comment
2024-02-08 21:15:58 -08:00
doctorpangloss
d5bcaa515c
Specify worker,frontend as default roles
2024-02-08 20:37:16 -08:00
doctorpangloss
54d419d855
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-02-08 20:31:05 -08:00
doctorpangloss
80f8c40248
Distributed queueing with amqp-compatible servers like RabbitMQ.
...
- Binary previews are not yet supported
- Use `--distributed-queue-connection-uri=amqp://guest:guest@rabbitmqserver/`
- Roles supported: frontend, worker or both (see `--help`)
- Run `comfy-worker` for a lightweight worker you can wrap your head
around
- Workers and frontends must have the same directory structure (set
with `--cwd`) and supported nodes. Frontends must still have access
to inputs and outputs.
- Configuration notes:
distributed_queue_connection_uri (Optional[str]): Servers and clients will connect to this AMQP URL to form a distributed queue and exchange prompt execution requests and progress updates.
distributed_queue_roles (List[str]): Specifies one or more roles for the distributed queue. Acceptable values are "worker" or "frontend", or both by writing the flag twice with each role. Frontends will start the web UI and connect to the provided AMQP URL to submit prompts; workers will pull requests off the AMQP URL.
distributed_queue_name (str): This name will be used by the frontends and workers to exchange prompt requests and replies. Progress updates will be prefixed by the queue name, followed by a '.', then the user ID.
2024-02-08 20:24:27 -08:00
doctorpangloss
0673262940
Fix entrypoints, add comfyui-worker entrypoint
2024-02-08 19:08:42 -08:00
doctorpangloss
72e92514a4
Better compatibility with pre-existing prompt_worker method
2024-02-08 18:07:37 -08:00
doctorpangloss
92898b8c9d
Improved support for distributed queues
2024-02-08 14:55:07 -08:00
doctorpangloss
3367362cec
Fix directml again now that I understand what the command line is doing
2024-02-08 10:17:49 -08:00
doctorpangloss
09838ed604
Update readme, remove unused import
2024-02-08 10:09:47 -08:00
doctorpangloss
04ce040d28
Fix commonpath / using arg.cwd on Windows
2024-02-08 09:30:16 -08:00
Benjamin Berman
8508a5a853
Fix args.directml is not None error
2024-02-08 08:40:13 -08:00
Benjamin Berman
b8fc850b47
Correctly preserves your installed torch when installed like pip install --no-build-isolation git+https://github.com/hiddenswitch/ComfyUI.git
2024-02-08 08:36:05 -08:00
blepping
a352c021ec
Allow custom samplers to request discard penultimate sigma
2024-02-08 02:24:23 -07:00
Benjamin Berman
e45433755e
Include missing seed parameter in sample workflow
2024-02-07 22:18:46 -08:00
doctorpangloss
123c512a84
Fix compatibility with Python 3.9, 3.10, fix Configuration class declaration issue
2024-02-07 21:52:20 -08:00
comfyanonymous
c661a8b118
Don't use numpy for calculating sigmas.
2024-02-07 18:52:51 -05:00
doctorpangloss
25c28867d2
Update script examples
2024-02-07 15:52:26 -08:00
doctorpangloss
d9b4607c36
Add locks to model_management to prevent multiple copies of the models from being loaded at the same time
2024-02-07 15:18:13 -08:00
doctorpangloss
8e9052c843
Merge with upstream
2024-02-07 14:27:50 -08:00
doctorpangloss
1b2ea61345
Improved API support
...
- Run comfyui workflows directly inside other python applications using
EmbeddedComfyClient.
- Optional telemetry in prompts and models using anonymity preserving
Plausible self-hosted or hosted.
- Better OpenAPI schema
- Basic support for distributed ComfyUI backends. Limitations: no
progress reporting, no easy way to start your own distributed
backend, requires RabbitMQ as a message broker.
2024-02-07 14:20:21 -08:00
comfyanonymous
236bda2683
Make minimum tile size the size of the overlap.
2024-02-05 01:29:26 -05:00
comfyanonymous
66e28ef45c
Don't use is_bf16_supported to check for fp16 support.
2024-02-04 20:53:35 -05:00
comfyanonymous
24129d78e6
Speed up SDXL on 16xx series with fp16 weights and manual cast.
2024-02-04 13:23:43 -05:00
comfyanonymous
4b0239066d
Always use fp16 for the text encoders.
2024-02-02 10:02:49 -05:00
comfyanonymous
da7a8df0d2
Put VAE key name in model config.
2024-01-30 02:24:38 -05:00
doctorpangloss
32d83e52ff
Fix CheckpointLoader even though it is deprecated
2024-01-29 17:20:10 -08:00
doctorpangloss
0d2cc553bc
Revert "Remove unused configs contents"
...
This reverts commit 65549c39f1.
2024-01-29 17:03:27 -08:00
doctorpangloss
2400da51e5
PyInstaller
2024-01-29 17:02:45 -08:00
doctorpangloss
82edb2ff0e
Merge with latest upstream.
2024-01-29 15:06:31 -08:00
doctorpangloss
65549c39f1
Remove unused configs contents
2024-01-29 14:14:46 -08:00
Jacob Segal
36b2214e30
Execution Model Inversion
...
This PR inverts the execution model -- from recursively calling nodes to
using a topological sort of the nodes. This change allows for
modification of the node graph during execution. This allows for two
major advantages:
1. The implementation of lazy evaluation in nodes. For example, if a
"Mix Images" node has a mix factor of exactly 0.0, the second image
input doesn't even need to be evaluated (and vice versa if the mix
factor is 1.0).
2. Dynamic expansion of nodes. This allows for the creation of dynamic
"node groups". Specifically, custom nodes can return subgraphs that
replace the original node in the graph. This is an incredibly
powerful concept. Using this functionality, it was easy to
implement:
a. Components (a.k.a. node groups)
b. Flow control (i.e. while loops) via tail recursion
c. All-in-one nodes that replicate the WebUI functionality
d. and more
All of those were able to be implemented entirely via custom nodes,
so those features are *not* a part of this PR. (There are some
front-end changes that should occur before that functionality is
made widely available, particularly around variant sockets.)
The custom nodes associated with this PR can be found at:
https://github.com/BadCafeCode/execution-inversion-demo-comfyui
Note that some of them require that variant socket types ("*") be
enabled.
2024-01-28 20:48:42 -08:00
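The execution-model inversion described above replaces recursive node calls with a topological sort of the graph. A minimal sketch of that ordering step (hypothetical graph representation, not ComfyUI's actual data structures; the real executor also handles lazy inputs and graph expansion during execution):

```python
from collections import deque

def execution_order(graph):
    """Topologically sort a node graph given as {node: [input_nodes]}
    using Kahn's algorithm: repeatedly run nodes whose inputs are done."""
    indegree = {n: len(inputs) for n, inputs in graph.items()}
    dependents = {n: [] for n in graph}
    for node, inputs in graph.items():
        for inp in inputs:
            dependents[inp].append(node)
    ready = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for dep in dependents[node]:
            indegree[dep] -= 1
            if indegree[dep] == 0:
                ready.append(dep)
    if len(order) != len(graph):
        raise ValueError("cycle in node graph")
    return order

# hypothetical "Mix Images"-style graph: two loads feed a blur and a mix
graph = {"load": [], "load2": [], "blur": ["load"], "mix": ["blur", "load2"]}
print(execution_order(graph))  # -> ['load', 'load2', 'blur', 'mix']
```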
comfyanonymous
89507f8adf
Remove some unused imports.
2024-01-25 23:42:37 -05:00
Dr.Lt.Data
05cd00695a
typo fix - calculate_sigmas_scheduler (#2619)
...
self.scheduler -> scheduler_name
Co-authored-by: Lt.Dr.Data <lt.dr.data@gmail.com>
2024-01-23 03:47:01 -05:00
comfyanonymous
4871a36458
Cleanup some unused imports.
2024-01-21 21:51:22 -05:00
comfyanonymous
78a70fda87
Remove useless import.
2024-01-19 15:38:05 -05:00
comfyanonymous
d76a04b6ea
Add unfinished ImageOnlyCheckpointSave node to save a SVD checkpoint.
...
This node is unfinished; SVD checkpoints saved with this node will
work with ComfyUI but not with anything else.
2024-01-17 19:46:21 -05:00
comfyanonymous
f9e55d8463
Only auto enable bf16 VAE on nvidia GPUs that actually support it.
2024-01-15 03:10:22 -05:00
comfyanonymous
2395ae740a
Make unclip more deterministic.
...
Pass a seed argument; note that this might make old unclip images different.
2024-01-14 17:28:31 -05:00
comfyanonymous
53c8a99e6c
Make server storage the default.
...
Remove --server-storage argument.
2024-01-11 17:21:40 -05:00
comfyanonymous
977eda19a6
Don't round noise mask.
2024-01-11 03:29:58 -05:00
comfyanonymous
10f2609fdd
Add InpaintModelConditioning node.
...
This is an alternative to VAE Encode for inpaint that should work with
lower denoise.
This is a different take on #2501
2024-01-11 03:15:27 -05:00
comfyanonymous
1a57423d30
Fix issue when using multiple t2i adapters with batched images.
2024-01-10 04:00:49 -05:00
comfyanonymous
6a7bc35db8
Use basic attention implementation for small inputs on old pytorch.
2024-01-09 13:46:52 -05:00
pythongosssss
235727fed7
Store user settings/data on the server and multi user support ( #2160 )
...
* wip per user data
* Rename, hide menu
* better error
rework default user
* store pretty
* Add userdata endpoints
Change nodetemplates to userdata
* add multi user message
* make normal arg
* Fix tests
* Ignore user dir
* user tests
* Changed to default to browser storage and add server-storage arg
* fix crash on empty templates
* fix settings added before load
* ignore parse errors
2024-01-08 17:06:44 -05:00
comfyanonymous
c6951548cf
Update optimized_attention_for_device function for new functions that
...
support masked attention.
2024-01-07 13:52:08 -05:00
comfyanonymous
aaa9017302
Add attention mask support to sub quad attention.
2024-01-07 04:13:58 -05:00
comfyanonymous
0c2c9fbdfa
Support attention mask in split attention.
2024-01-06 13:16:48 -05:00
comfyanonymous
3ad0191bfb
Implement attention mask on xformers.
2024-01-06 04:33:03 -05:00
doctorpangloss
42232f4d20
fix module ref
2024-01-03 20:12:33 -08:00
doctorpangloss
345825dfb5
Fix issues with missing __init__ in upscaler, move web/ directory to comfy/web so that the need for symbolic link support on windows is eliminated
2024-01-03 16:35:00 -08:00
doctorpangloss
d31298ac60
gligen is missing math import
2024-01-03 16:01:44 -08:00
doctorpangloss
369aeb598f
Merge upstream, fix 3.12 compatibility, fix nightlies issue, fix broken node
2024-01-03 16:00:36 -08:00
comfyanonymous
8c6493578b
Implement noise augmentation for SD 4X upscale model.
2024-01-03 14:27:11 -05:00
comfyanonymous
ef4f6037cb
Fix model patches not working in custom sampling scheduler nodes.
2024-01-03 12:16:30 -05:00
comfyanonymous
a7874d1a8b
Add support for the stable diffusion x4 upscaling model.
...
This is an old model.
Load the checkpoint like a regular one and use the new
SD_4XUpscale_Conditioning node.
2024-01-03 03:37:56 -05:00
comfyanonymous
2c4e92a98b
Fix regression.
2024-01-02 14:41:33 -05:00
comfyanonymous
5eddfdd80c
Refactor VAE code.
...
Replace constants with downscale_ratio and latent_channels.
2024-01-02 13:24:34 -05:00
comfyanonymous
a47f609f90
Auto detect out_channels from model.
2024-01-02 01:50:57 -05:00
comfyanonymous
79f73a4b33
Remove useless code.
2024-01-02 01:50:29 -05:00
comfyanonymous
1b103e0cb2
Add argument to run the VAE on the CPU.
2023-12-30 05:49:07 -05:00
comfyanonymous
12e822c6c8
Use function to calculate model size in model patcher.
2023-12-28 21:46:20 -05:00
comfyanonymous
e1e322cf69
Load weights that can't be lowvramed to target device.
2023-12-28 21:41:10 -05:00
comfyanonymous
c782144433
Fix clip vision lowvram mode not working.
2023-12-27 13:50:57 -05:00
comfyanonymous
f21bb41787
Fix taesd VAE in lowvram mode.
2023-12-26 12:52:21 -05:00
comfyanonymous
61b3f15f8f
Fix lowvram mode not working with unCLIP and Revision code.
2023-12-26 05:02:02 -05:00
comfyanonymous
d0165d819a
Fix SVD lowvram mode.
2023-12-24 07:13:18 -05:00
comfyanonymous
a252963f95
--disable-smart-memory now unloads everything like it did originally.
2023-12-23 04:25:06 -05:00
comfyanonymous
36a7953142
Greatly improve lowvram sampling speed by getting rid of accelerate.
...
Let me know if this breaks anything.
2023-12-22 14:38:45 -05:00
comfyanonymous
261bcbb0d9
A few missing comfy ops in the VAE.
2023-12-22 04:05:42 -05:00
comfyanonymous
9a7619b72d
Fix regression with inpaint model.
2023-12-19 02:32:59 -05:00
comfyanonymous
571ea8cdcc
Fix SAG not working with cfg 1.0
2023-12-18 17:03:32 -05:00
comfyanonymous
8cf1daa108
Fix SDXL area composition sometimes not using the right pooled output.
2023-12-18 12:54:23 -05:00
comfyanonymous
2258f85159
Support stable zero 123 model.
...
To use it use the ImageOnlyCheckpointLoader to load the checkpoint and
the new Stable_Zero123 node.
2023-12-18 03:48:04 -05:00
comfyanonymous
2f9d6a97ec
Add --deterministic option to make pytorch use deterministic algorithms.
2023-12-17 16:59:21 -05:00
comfyanonymous
e45d920ae3
Don't resize clip vision image when the size is already good.
2023-12-16 03:06:10 -05:00
comfyanonymous
13e6d5366e
Switch clip vision to manual cast.
...
Make it use the same dtype as the text encoder.
2023-12-16 02:47:26 -05:00
comfyanonymous
719fa0866f
Set clip vision model in eval mode so it works without inference mode.
2023-12-15 18:53:08 -05:00
Hari
574363a8a6
Implement Perp-Neg
2023-12-16 00:28:16 +05:30
comfyanonymous
a5056cfb1f
Remove useless code.
2023-12-15 01:28:16 -05:00
comfyanonymous
329c571993
Improve code legibility.
2023-12-14 11:41:49 -05:00
comfyanonymous
6c5990f7db
Fix cfg being calculated more than once if sampler_cfg_function.
2023-12-13 20:28:04 -05:00
comfyanonymous
ba04a87d10
Refactor and improve the sag node.
...
Moved all the sag related code to comfy_extras/nodes_sag.py
2023-12-13 16:11:26 -05:00
Rafie Walker
6761233e9d
Implement Self-Attention Guidance ( #2201 )
...
* First SAG test
* need to put extra options on the model instead of patcher
* no errors and results seem not-broken
* Use @ashen-uncensored formula, which works better!!!
* Fix a crash when using weird resolutions. Remove an unnecessary UNet call
* Improve comments, optimize memory in blur routine
* SAG works with sampler_cfg_function
2023-12-13 15:52:11 -05:00
comfyanonymous
b454a67bb9
Support segmind vega model.
2023-12-12 19:09:53 -05:00
comfyanonymous
824e4935f5
Add dtype parameter to VAE object.
2023-12-12 12:03:29 -05:00
comfyanonymous
32b7e7e769
Add manual cast to controlnet.
2023-12-12 11:32:42 -05:00
comfyanonymous
3152023fbc
Use inference dtype for unet memory usage estimation.
2023-12-11 23:50:38 -05:00
comfyanonymous
77755ab8db
Refactor comfy.ops
...
comfy.ops -> comfy.ops.disable_weight_init
This should make it more clear what they actually do.
Some unused code has also been removed.
2023-12-11 23:27:13 -05:00
comfyanonymous
b0aab1e4ea
Add an option --fp16-unet to force using fp16 for the unet.
2023-12-11 18:36:29 -05:00
comfyanonymous
ba07cb748e
Use faster manual cast for fp8 in unet.
2023-12-11 18:24:44 -05:00
comfyanonymous
57926635e8
Switch text encoder to manual cast.
...
Use fp16 text encoder weights for CPU inference to lower memory usage.
2023-12-10 23:00:54 -05:00
comfyanonymous
340177e6e8
Disable non blocking on mps.
2023-12-10 01:30:35 -05:00
comfyanonymous
614b7e731f
Implement GLora.
2023-12-09 18:15:26 -05:00
comfyanonymous
cb63e230b4
Make lora code a bit cleaner.
2023-12-09 14:15:09 -05:00
comfyanonymous
174eba8e95
Use own clip vision model implementation.
2023-12-09 11:56:31 -05:00
comfyanonymous
97015b6b38
Cleanup.
2023-12-08 16:02:08 -05:00
comfyanonymous
a4ec54a40d
Add linear_start and linear_end to model_config.sampling_settings
2023-12-08 02:49:30 -05:00
comfyanonymous
9ac0b487ac
Make --gpu-only put intermediate values in GPU memory instead of cpu.
2023-12-08 02:35:45 -05:00
comfyanonymous
efb704c758
Support attention masking in CLIP implementation.
2023-12-07 02:51:02 -05:00
comfyanonymous
fbdb14d4c4
Cleaner CLIP text encoder implementation.
...
Use a simple CLIP model implementation instead of the one from
transformers.
This will allow some interesting things that would be too hackish to implement
using the transformers implementation.
2023-12-06 23:50:03 -05:00
doctorpangloss
3fd5de9784
fix nodes
2023-12-06 17:30:33 -08:00
comfyanonymous
2db86b4676
Slightly faster lora applying.
2023-12-06 05:13:14 -05:00
comfyanonymous
1bbd65ab30
Missed this one.
2023-12-05 12:48:41 -05:00
comfyanonymous
9b655d4fd7
Fix memory issue with control loras.
2023-12-04 21:55:19 -05:00
comfyanonymous
26b1c0a771
Fix control lora on fp8.
2023-12-04 13:47:41 -05:00
comfyanonymous
be3468ddd5
Less useless downcasting.
2023-12-04 12:53:46 -05:00
comfyanonymous
ca82ade765
Use .itemsize to get dtype size for fp8.
2023-12-04 11:52:06 -05:00
comfyanonymous
31b0f6f3d8
UNET weights can now be stored in fp8.
...
--fp8_e4m3fn-unet and --fp8_e5m2-unet are the two different formats
supported by pytorch.
2023-12-04 11:10:00 -05:00
comfyanonymous
af365e4dd1
All the unet ops with weights are now handled by comfy.ops
2023-12-04 03:12:18 -05:00
Benjamin Berman
01312a55a4
merge upstream
2023-12-03 20:41:13 -08:00
comfyanonymous
61a123a1e0
A different way of handling multiple images passed to SVD.
...
Previously when a list of 3 images [0, 1, 2] was used for a 6 frame video
they were concatenated like this:
[0, 1, 2, 0, 1, 2]
now they are concatenated like this:
[0, 0, 1, 1, 2, 2]
2023-12-03 03:31:47 -05:00
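The frame-assignment change above can be illustrated with a plain-list sketch (hypothetical helper, not the actual tensor code): each conditioning image is now repeated in place across its share of frames instead of the whole list being tiled.

```python
def spread_images(images, num_frames):
    """Assign one conditioning image per output frame.
    New behaviour from the commit: each image is repeated in place
    ([0, 0, 1, 1, 2, 2]) rather than the list being tiled
    ([0, 1, 2, 0, 1, 2])."""
    per_image = num_frames // len(images)
    out = []
    for img in images:
        out.extend([img] * per_image)
    # pad with the last image if num_frames isn't evenly divisible
    out.extend([images[-1]] * (num_frames - len(out)))
    return out

print(spread_images([0, 1, 2], 6))  # -> [0, 0, 1, 1, 2, 2]
```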
comfyanonymous
c97be4db91
Support SD2.1 turbo checkpoint.
2023-11-30 19:27:03 -05:00
comfyanonymous
983ebc5792
Use smart model management for VAE to decrease latency.
2023-11-28 04:58:51 -05:00
comfyanonymous
c45d1b9b67
Add a function to load a unet from a state dict.
2023-11-27 17:41:29 -05:00
comfyanonymous
f30b992b18
.sigma and .timestep now return tensors on the same device as the input.
2023-11-27 16:41:33 -05:00
comfyanonymous
13fdee6abf
Try to free memory for both cond+uncond before inference.
2023-11-27 14:55:40 -05:00
comfyanonymous
be71bb5e13
Tweak memory inference calculations a bit.
2023-11-27 14:04:16 -05:00
comfyanonymous
39e75862b2
Fix regression from last commit.
2023-11-26 03:43:02 -05:00
comfyanonymous
50dc39d6ec
Clean up the extra_options dict for the transformer patches.
...
Now everything in transformer_options gets put in extra_options.
2023-11-26 03:13:56 -05:00
comfyanonymous
5d6dfce548
Fix importing diffusers unets.
2023-11-24 20:35:29 -05:00
comfyanonymous
3e5ea74ad3
Make buggy xformers fall back on pytorch attention.
2023-11-24 03:55:35 -05:00
comfyanonymous
871cc20e13
Support SVD img2vid model.
2023-11-23 19:41:33 -05:00
comfyanonymous
410bf07771
Make VAE memory estimation take dtype into account.
2023-11-22 18:17:19 -05:00
comfyanonymous
32447f0c39
Add sampling_settings so models can specify specific sampling settings.
2023-11-22 17:24:00 -05:00
comfyanonymous
c3ae99a749
Allow controlling downscale and upscale methods in PatchModelAddDownscale.
2023-11-22 03:23:16 -05:00
comfyanonymous
72741105a6
Remove useless code.
2023-11-21 17:27:28 -05:00
comfyanonymous
6a491ebe27
Allow model config to preprocess the vae state dict on load.
2023-11-21 16:29:18 -05:00
comfyanonymous
cd4fc77d5f
Add taesd and taesdxl to VAELoader node.
...
They will show up if both the taesd_encoder and taesd_decoder (or the
taesdxl equivalents) model files are present in the models/vae_approx directory.
2023-11-21 12:54:19 -05:00
comfyanonymous
ce67dcbcda
Make it easy for models to process the unet state dict on load.
2023-11-20 23:17:53 -05:00
comfyanonymous
d9d8702d8d
percent_to_sigma now returns a float instead of a tensor.
2023-11-18 23:20:29 -05:00
comfyanonymous
0cf4e86939
Add some command line arguments to store text encoder weights in fp8.
...
Pytorch supports two variants of fp8:
--fp8_e4m3fn-text-enc (the one that seems to give better results)
--fp8_e5m2-text-enc
2023-11-17 02:56:59 -05:00
comfyanonymous
107e78b1cb
Add support for loading SSD1B diffusers unet version.
...
Improve diffusers model detection.
2023-11-16 23:12:55 -05:00
comfyanonymous
7e3fe3ad28
Make deep shrink behave like it should.
2023-11-16 15:26:28 -05:00
comfyanonymous
9f00a18095
Fix potential issues.
2023-11-16 14:59:54 -05:00
comfyanonymous
7ea6bb038c
Print warning when controlnet can't be applied instead of crashing.
2023-11-16 12:57:12 -05:00
comfyanonymous
dcec1047e6
Invert the start and end percentages in the code.
...
This doesn't affect how percentages behave in the frontend but breaks
things if you relied on them in the backend.
percent_to_sigma goes from 0 to 1.0 instead of 1.0 to 0 for less confusion.
Make percent 0 return an extremely large sigma and percent 1.0 return a
sigma of zero to fix imprecision.
2023-11-16 04:23:44 -05:00
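The percent convention described above can be sketched in plain Python. This is a hypothetical illustration of the mapping, not ComfyUI's actual implementation; `sigmas` is assumed to be a descending noise schedule.

```python
def percent_to_sigma(percent, sigmas):
    """Map a sampling percentage (0.0 = start, 1.0 = end) to a sigma.
    Per the commit: percent 0 returns an extremely large sigma and
    percent 1.0 returns 0.0, avoiding comparison imprecision at the
    endpoints of the schedule."""
    if percent <= 0.0:
        return 999999999.9   # effectively "from the very start"
    if percent >= 1.0:
        return 0.0           # effectively "until the very end"
    idx = round(percent * (len(sigmas) - 1))
    return sigmas[idx]

schedule = [14.6, 10.0, 6.0, 3.0, 1.0, 0.03]  # descending sigmas
print(percent_to_sigma(0.0, schedule), percent_to_sigma(1.0, schedule))
```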
comfyanonymous
57eea0efbb
heunpp2 sampler.
2023-11-14 23:50:55 -05:00
comfyanonymous
728613bb3e
Fix last pr.
2023-11-14 14:41:31 -05:00
comfyanonymous
ec3d0ab432
Merge branch 'master' of https://github.com/Jannchie/ComfyUI
2023-11-14 14:38:07 -05:00
comfyanonymous
c962884a5c
Make bislerp work on GPU.
2023-11-14 11:38:36 -05:00
comfyanonymous
420beeeb05
Clean up and refactor sampler code.
...
This should make it much easier to write custom nodes with kdiffusion type
samplers.
2023-11-14 00:39:34 -05:00
Jianqi Pan
f2e49b1d57
fix: adaptation to older versions of pytorch
2023-11-14 14:32:05 +09:00
comfyanonymous
94cc718e9c
Add a way to add patches to the input block.
2023-11-14 00:08:12 -05:00
comfyanonymous
7339479b10
Disable xformers when it can't load properly.
2023-11-13 12:31:10 -05:00
comfyanonymous
4781819a85
Make memory estimation aware of model dtype.
2023-11-12 04:28:26 -05:00
comfyanonymous
dd4ba68b6e
Allow different models to estimate memory usage differently.
2023-11-12 04:03:52 -05:00
comfyanonymous
2c9dba8dc0
sampling_function now has the model object as the argument.
2023-11-12 03:45:10 -05:00
comfyanonymous
8d80584f6a
Remove useless argument from uni_pc sampler.
2023-11-12 01:25:33 -05:00
comfyanonymous
248aa3e563
Fix bug.
2023-11-11 12:20:16 -05:00
comfyanonymous
4a8a839b40
Add option to use in place weight updating in ModelPatcher.
2023-11-11 01:11:12 -05:00
comfyanonymous
412d3ff57d
Refactor.
2023-11-11 01:11:06 -05:00
comfyanonymous
58d5d71a93
Working RescaleCFG node.
...
This was broken because of recent changes so I fixed it and moved it from
the experiments repo.
2023-11-10 20:52:10 -05:00
comfyanonymous
3e0033ef30
Fix model merge bug.
...
Unload models before getting weights for model patching.
2023-11-10 03:19:05 -05:00
comfyanonymous
002aefa382
Support lcm models.
...
Use the "lcm" sampler to sample them; you also have to use the
ModelSamplingDiscrete node to set them as lcm models to use them properly.
2023-11-09 18:30:22 -05:00
comfyanonymous
ec12000136
Add support for full diff lora keys.
2023-11-08 22:05:31 -05:00
comfyanonymous
064d7583eb
Add a CONDConstant for passing non tensor conds to unet.
2023-11-08 01:59:09 -05:00
comfyanonymous
794dd2064d
Fix typo.
2023-11-07 23:41:55 -05:00
comfyanonymous
0a6fd49a3e
Print leftover keys when using the UNETLoader.
2023-11-07 22:15:55 -05:00
comfyanonymous
fe40109b57
Fix issue with object patches not being copied with patcher.
2023-11-07 22:15:15 -05:00
comfyanonymous
a527d0c795
Code refactor.
2023-11-07 19:33:40 -05:00
comfyanonymous
2a23ba0b8c
Fix unet ops not entirely on GPU.
2023-11-07 04:30:37 -05:00
comfyanonymous
844dbf97a7
Add: advanced->model->ModelSamplingDiscrete node.
...
This allows changing the sampling parameters of the model (eps or vpred)
or setting the model to use zsnr.
2023-11-07 03:28:53 -05:00
comfyanonymous
656c0b5d90
CLIP code refactor and improvements.
...
More generic clip model class that can be used on more types of text
encoders.
Don't apply weighting algorithm when weight is 1.0
Don't compute an empty token output when it's not needed.
2023-11-06 14:17:41 -05:00
comfyanonymous
b3fcd64c6c
Make SDTokenizer class work with more types of tokenizers.
2023-11-06 01:09:18 -05:00
gameltb
7e455adc07
fix unet_wrapper_function name in ModelPatcher
2023-11-05 17:11:44 +08:00
comfyanonymous
1ffa8858e7
Move model sampling code to comfy/model_sampling.py
2023-11-04 01:32:23 -04:00
comfyanonymous
ae2acfc21b
Don't convert NaN to zero.
...
Converting NaN to zero is a bad idea because it makes it hard to tell when
something went wrong.
2023-11-03 13:13:15 -04:00
comfyanonymous
d2e27b48f1
sampler_cfg_function now gets the noisy output as argument again.
...
This should make things that use sampler_cfg_function behave like before.
Added an input argument for those that want the denoised output.
This means you can calculate the x0 prediction of the model by doing:
(input - cond) for example.
2023-11-01 21:24:08 -04:00
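The commit above passes the noisy model output to sampler_cfg_function again and adds an input for the denoised output. A minimal sketch of a CFG combine step in that style (the argument names and the plain-float arithmetic are assumptions for illustration, not the exact API):

```python
def cfg_combine(args):
    """Classifier-free guidance combine step.
    `args` is assumed to carry "cond" / "uncond" (model outputs),
    "cond_scale", and "input" (the noisy sample). As the commit
    notes, input - cond gives the x0 prediction."""
    cond, uncond = args["cond"], args["uncond"]
    scale, x = args["cond_scale"], args["input"]
    # standard classifier-free guidance on the model outputs
    cfg = uncond + scale * (cond - uncond)
    # x0 prediction, per the commit message: input minus cond output
    x0_pred = x - cond
    return cfg, x0_pred

cfg, x0 = cfg_combine({"cond": 1.0, "uncond": 0.5, "cond_scale": 8.0, "input": 2.0})
print(cfg, x0)  # -> 4.5 1.0
```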
comfyanonymous
2455aaed8a
Allow model or clip to be None in load_lora_for_models.
2023-11-01 20:27:20 -04:00
comfyanonymous
ecb80abb58
Allow ModelSamplingDiscrete to be instantiated without a model config.
2023-11-01 19:13:03 -04:00
comfyanonymous
e73ec8c4da
Not used anymore.
2023-11-01 00:01:30 -04:00
comfyanonymous
111f1b5255
Fix some issues with sampling precision.
2023-10-31 23:49:29 -04:00
comfyanonymous
7c0f255de1
Clean up percent start/end and make controlnets work with sigmas.
2023-10-31 22:14:32 -04:00
comfyanonymous
a268a574fa
Remove a bunch of useless code.
...
DDIM is the same as euler with a small difference in the inpaint code.
DDIM uses randn_like but I set a fixed seed instead.
I'm keeping it in because I'm sure if I remove it people are going to
complain.
2023-10-31 18:11:29 -04:00
comfyanonymous
1777b54d02
Sampling code changes.
...
apply_model in model_base now returns the denoised output.
This means that sampling_function now computes things on the denoised
output instead of the model output. This should make things more consistent
across current and future models.
2023-10-31 17:33:43 -04:00
comfyanonymous
c837a173fa
Fix some memory issues in sub quad attention.
2023-10-30 15:30:49 -04:00
comfyanonymous
125b03eead
Fix some OOM issues with split attention.
2023-10-30 13:14:11 -04:00
comfyanonymous
a12cc05323
Add --max-upload-size argument, the default is 100MB.
2023-10-29 03:55:46 -04:00
comfyanonymous
2a134bfab9
Fix checkpoint loader with config.
2023-10-27 22:13:55 -04:00
comfyanonymous
e60ca6929a
SD1 and SD2 clip and tokenizer code is now more similar to the SDXL one.
2023-10-27 15:54:04 -04:00
comfyanonymous
6ec3f12c6e
Support SSD1B model and make it easier to support asymmetric unets.
2023-10-27 14:45:15 -04:00
comfyanonymous
434ce25ec0
Restrict loading embeddings from embedding folders.
2023-10-27 02:54:13 -04:00
comfyanonymous
723847f6b3
Faster clip image processing.
2023-10-26 01:53:01 -04:00
comfyanonymous
a373367b0c
Fix some OOM issues with split and sub quad attention.
2023-10-25 20:17:28 -04:00
comfyanonymous
7fbb217d3a
Fix uni_pc returning noisy image when steps <= 3
2023-10-25 16:08:30 -04:00
Jedrzej Kosinski
3783cb8bfd
change 'c_adm' to 'y' in ControlNet.get_control
2023-10-25 08:24:32 -05:00
comfyanonymous
d1d2fea806
Pass extra conds directly to unet.
2023-10-25 00:07:53 -04:00
comfyanonymous
036f88c621
Refactor to make it easier to add custom conds to models.
2023-10-24 23:31:12 -04:00
comfyanonymous
3fce8881ca
Sampling code refactor to make it easier to add more conds.
2023-10-24 03:38:41 -04:00
comfyanonymous
8594c8be4d
Empty the cache when torch cache is more than 25% free mem.
2023-10-22 13:58:12 -04:00
comfyanonymous
8b65f5de54
attention_basic now works with hypertile.
2023-10-22 03:59:53 -04:00
comfyanonymous
e6bc42df46
Make sub_quad and split work with hypertile.
2023-10-22 03:51:29 -04:00
comfyanonymous
a0690f9df9
Fix t2i adapter issue.
2023-10-21 20:31:24 -04:00
comfyanonymous
9906e3efe3
Make xformers work with hypertile.
2023-10-21 13:23:03 -04:00
comfyanonymous
4185324a1d
Fix uni_pc sampler math. This changes the images this sampler produces.
2023-10-20 04:16:53 -04:00
comfyanonymous
e6962120c6
Make sure cond_concat is on the right device.
2023-10-19 01:14:25 -04:00
comfyanonymous
45c972aba8
Refactor cond_concat into conditioning.
2023-10-18 20:36:58 -04:00
comfyanonymous
430a8334c5
Fix some potential issues.
2023-10-18 19:48:36 -04:00
comfyanonymous
782a24fce6
Refactor cond_concat into model object.
2023-10-18 16:48:37 -04:00
comfyanonymous
0d45a565da
Fix memory issue related to control loras.
...
The cleanup function was not getting called.
2023-10-18 02:43:01 -04:00
doctorpangloss
04d0ecd0d4
merge upstream
2023-10-17 17:49:31 -07:00
doctorpangloss
53261eb26b
Fix another relative path
2023-10-17 16:15:42 -07:00
Benjamin Berman
d21655b5a2
merge upstream
2023-10-17 14:47:59 -07:00
comfyanonymous
d44a2de49f
Make VAE code closer to sgm.
2023-10-17 15:18:51 -04:00
comfyanonymous
23680a9155
Refactor the attention stuff in the VAE.
2023-10-17 03:19:29 -04:00
comfyanonymous
c8013f73e5
Add some Quadro cards to the list of cards with broken fp16.
2023-10-16 16:48:46 -04:00
comfyanonymous
bb064c9796
Add a separate optimized_attention_masked function.
2023-10-16 02:31:24 -04:00
comfyanonymous
fd4c5f07e7
Add a --bf16-unet to test running the unet in bf16.
2023-10-13 14:51:10 -04:00
comfyanonymous
9a55dadb4c
Refactor code so model can be a dtype other than fp32 or fp16.
2023-10-13 14:41:17 -04:00
comfyanonymous
88733c997f
pytorch_attention_enabled can now return True when xformers is enabled.
2023-10-11 21:30:57 -04:00
comfyanonymous
20d3852aa1
Pull some small changes from the other repo.
2023-10-11 20:38:48 -04:00
comfyanonymous
ac7d8cfa87
Allow attn_mask in attention_pytorch.
2023-10-11 20:38:48 -04:00
comfyanonymous
1a4bd9e9a6
Refactor the attention functions.
...
There's no reason for the whole CrossAttention object to be repeated when
only the operation in the middle changes.
2023-10-11 20:38:48 -04:00
comfyanonymous
8cc75c64ff
Let unet wrapper functions have .to attributes.
2023-10-11 01:34:38 -04:00
comfyanonymous
5e885bd9c8
Cleanup.
2023-10-10 21:46:53 -04:00
comfyanonymous
851bb87ca9
Merge branch 'taesd_safetensors' of https://github.com/mochiya98/ComfyUI
2023-10-10 21:42:35 -04:00
Yukimasa Funaoka
9eb621c95a
Supports TAESD models in safetensors format
2023-10-10 13:21:44 +09:00
comfyanonymous
d1a0abd40b
Merge branch 'input-directory' of https://github.com/jn-jairo/ComfyUI
2023-10-09 01:53:29 -04:00
doctorpangloss
e8b60dfc6e
merge upstream
2023-10-06 15:02:31 -07:00
comfyanonymous
72188dffc3
load_checkpoint_guess_config can now optionally output the model.
2023-10-06 13:48:18 -04:00
Jairo Correa
63e5fd1790
Option to input directory
2023-10-04 19:45:15 -03:00
City
9bfec2bdbf
Fix quality loss due to low precision
2023-10-04 15:40:59 +02:00
badayvedat
0f17993d05
fix: typo in extra sampler
2023-09-29 06:09:59 +03:00
comfyanonymous
66756de100
Add SamplerDPMPP_2M_SDE node.
2023-09-28 21:56:23 -04:00
comfyanonymous
71713888c4
Print missing VAE keys.
2023-09-28 00:54:57 -04:00
comfyanonymous
d234ca558a
Add missing samplers to KSamplerSelect.
2023-09-28 00:17:03 -04:00
comfyanonymous
1adcc4c3a2
Add a SamplerCustom Node.
...
This node takes a list of sigmas and a sampler object as input.
This lets people easily implement custom schedulers and samplers as nodes.
More nodes will be added to it in the future.
2023-09-27 22:21:18 -04:00
comfyanonymous
bf3fc2f1b7
Refactor sampling related code.
2023-09-27 16:45:22 -04:00
comfyanonymous
fff491b032
Model patches can now know which batch is positive and negative.
2023-09-27 12:04:07 -04:00
comfyanonymous
1d6dd83184
Scheduler code refactor.
2023-09-26 17:07:07 -04:00
comfyanonymous
446caf711c
Sampling code refactor.
2023-09-26 13:45:15 -04:00
comfyanonymous
76cdc809bf
Support more controlnet models.
2023-09-23 18:47:46 -04:00
comfyanonymous
ae87543653
Merge branch 'cast_intel' of https://github.com/simonlui/ComfyUI
2023-09-23 00:57:17 -04:00
Simon Lui
eec449ca8e
Allow Intel GPUs to LoRA cast on GPU since it supports BF16 natively.
2023-09-22 21:11:27 -07:00
comfyanonymous
afa2399f79
Add a way to set output block patches to modify the h and hsp.
2023-09-22 20:26:47 -04:00
comfyanonymous
492db2de8d
Allow having a different pooled output for each image in a batch.
2023-09-21 01:14:42 -04:00
comfyanonymous
1cdfb3dba4
Only do the cast on the device if the device supports it.
2023-09-20 17:52:41 -04:00
comfyanonymous
7c9a92f552
Don't depend on torchvision.
2023-09-19 13:12:47 -04:00
MoonRide303
2b6b178173
Added support for lanczos scaling
2023-09-19 10:40:38 +02:00
comfyanonymous
b92bf8196e
Do lora cast on GPU instead of CPU for higher performance.
2023-09-18 23:04:49 -04:00
comfyanonymous
321c5fa295
Enable pytorch attention by default on xpu.
2023-09-17 04:09:19 -04:00
comfyanonymous
61b1f67734
Support models without previews.
2023-09-16 12:59:54 -04:00
comfyanonymous
43d4935a1d
Add cond_or_uncond array to transformer_options so hooks can check what is
...
cond and what is uncond.
2023-09-15 22:21:14 -04:00
comfyanonymous
415abb275f
Add DDPM sampler.
2023-09-15 19:22:47 -04:00
comfyanonymous
94e4fe39d8
This isn't used anywhere.
2023-09-15 12:03:03 -04:00
comfyanonymous
44361f6344
Support for text encoder models that need attention_mask.
2023-09-15 02:02:05 -04:00
comfyanonymous
0d8f376446
Set last layer on SD2.x models uses the proper indexes now.
...
Before, I had made the last layer the penultimate layer because some
checkpoints don't have it, but that's not consistent with the other models.
TLDR: for SD2.x models only: CLIPSetLastLayer -1 is now -2.
2023-09-14 20:28:22 -04:00
comfyanonymous
0966d3ce82
Don't run text encoders on xpu because there are issues.
2023-09-14 12:16:07 -04:00
comfyanonymous
3039b08eb1
Only parse command line args when main.py is called.
2023-09-13 11:38:20 -04:00
comfyanonymous
ed58730658
Don't leave very large hidden states in the clip vision output.
2023-09-12 15:09:10 -04:00
comfyanonymous
fb3b728203
Fix issue where autocast fp32 CLIP gave different results from regular.
2023-09-11 21:49:56 -04:00
comfyanonymous
7d401ed1d0
Add ldm format support to UNETLoader.
2023-09-11 16:36:50 -04:00
comfyanonymous
e85be36bd2
Add a penultimate_hidden_states to the clip vision output.
2023-09-08 14:06:58 -04:00
comfyanonymous
1e6b67101c
Support diffusers format t2i adapters.
2023-09-08 11:36:51 -04:00
comfyanonymous
326577d04c
Allow cancelling of everything with a progress bar.
2023-09-07 23:37:03 -04:00
comfyanonymous
f88f7f413a
Add a ConditioningSetAreaPercentage node.
2023-09-06 03:28:27 -04:00
comfyanonymous
1938f5c5fe
Add a force argument to soft_empty_cache to force a cache empty.
2023-09-04 00:58:18 -04:00
comfyanonymous
7746bdf7b0
Merge branch 'generalize_fixes' of https://github.com/simonlui/ComfyUI
2023-09-04 00:43:11 -04:00
Simon Lui
2da73b7073
Revert changes in comfy/ldm/modules/diffusionmodules/util.py, which is unused.
2023-09-02 20:07:52 -07:00
comfyanonymous
a74c5dbf37
Move some functions to utils.py
2023-09-02 22:33:37 -04:00
Simon Lui
4a0c4ce4ef
Some fixes to generalize CUDA specific functionality to Intel or other GPUs.
2023-09-02 18:22:10 -07:00
comfyanonymous
77a176f9e0
Use common function to reshape batch to.
2023-09-02 03:42:49 -04:00
comfyanonymous
7931ff0fd9
Support SDXL inpaint models.
2023-09-01 15:22:52 -04:00
comfyanonymous
0e3b641172
Remove xformers related print.
2023-09-01 02:12:03 -04:00
comfyanonymous
5c363a9d86
Fix controlnet bug.
2023-09-01 02:01:08 -04:00
comfyanonymous
cfe1c54de8
Fix controlnet issue.
2023-08-31 15:16:58 -04:00
comfyanonymous
1c012d69af
It doesn't make sense for c_crossattn and c_concat to be lists.
2023-08-31 13:25:00 -04:00
comfyanonymous
7e941f9f24
Clean up DiffusersLoader node.
2023-08-30 12:57:07 -04:00
Simon Lui
18617967e5
Fix error message in model_patcher.py
...
Found while tinkering.
2023-08-30 00:25:04 -07:00
comfyanonymous
fe4c07400c
Fix "Load Checkpoint with config" node.
2023-08-29 23:58:32 -04:00
comfyanonymous
f2f5e5dcbb
Support SDXL t2i adapters with 3 channel input.
2023-08-29 16:44:57 -04:00
doctorpangloss
db673f7728
merge upstream
2023-08-29 13:36:53 -07:00
comfyanonymous
15adc3699f
Move beta_schedule to model_config and allow disabling unet creation.
2023-08-29 14:22:53 -04:00
comfyanonymous
bed116a1f9
Remove optimization that caused border.
2023-08-29 11:21:36 -04:00
comfyanonymous
65cae62c71
No need to check filename extensions to detect shuffle controlnet.
2023-08-28 16:49:06 -04:00
comfyanonymous
4e89b2c25a
Put clip vision outputs on the CPU.
2023-08-28 16:26:11 -04:00
comfyanonymous
a094b45c93
Load clipvision model to GPU for faster performance.
2023-08-28 15:29:27 -04:00
comfyanonymous
1300a1bb4c
Text encoder should initially load on the offload_device, not the regular device.
2023-08-28 15:08:45 -04:00
comfyanonymous
f92074b84f
Move ModelPatcher to model_patcher.py
2023-08-28 14:51:31 -04:00
comfyanonymous
4798cf5a62
Implement loras with norm keys.
2023-08-28 11:20:06 -04:00
comfyanonymous
b8c7c770d3
Enable bf16-vae by default on ampere and up.
2023-08-27 23:06:19 -04:00
comfyanonymous
1c794a2161
Fallback to slice attention if xformers doesn't support the operation.
2023-08-27 22:24:42 -04:00
comfyanonymous
d935ba50c4
Make --bf16-vae work on torch 2.0
2023-08-27 21:33:53 -04:00
comfyanonymous
a57b0c797b
Fix lowvram model merging.
2023-08-26 11:52:07 -04:00
comfyanonymous
f72780a7e3
The new smart memory management makes this unnecessary.
2023-08-25 18:02:15 -04:00
comfyanonymous
c77f02e1c6
Move controlnet code to comfy/controlnet.py
2023-08-25 17:33:04 -04:00
comfyanonymous
15a7716fa6
Move lora code to comfy/lora.py
2023-08-25 17:11:51 -04:00
doctorpangloss
3a56b15bc2
use comfyui.custom_nodes as the plugin entrypoint and fix the protocol
2023-08-24 22:23:12 -07:00
comfyanonymous
ec96f6d03a
Move text_projection to base clip model.
2023-08-24 23:43:48 -04:00
comfyanonymous
30eb92c3cb
Code cleanups.
2023-08-24 19:39:18 -04:00
comfyanonymous
51dde87e97
Try to free enough vram for control lora inference.
2023-08-24 17:20:54 -04:00
comfyanonymous
e3d0a9a490
Fix potential issue with text projection matrix multiplication.
2023-08-24 00:54:16 -04:00
comfyanonymous
cc44ade79e
Always shift text encoder to GPU when the device supports fp16.
2023-08-23 21:45:00 -04:00
comfyanonymous
a6ef08a46a
Even with forced fp16 the cpu device should never use it.
2023-08-23 21:38:28 -04:00
comfyanonymous
00c0b2c507
Initialize text encoder to target dtype.
2023-08-23 21:01:15 -04:00
Benjamin Berman
e9365c4678
wip
2023-08-23 16:14:12 -07:00
comfyanonymous
f081017c1a
Save memory by storing text encoder weights in fp16 in most situations.
...
Do inference in fp32 to make sure quality stays the exact same.
2023-08-23 01:08:51 -04:00
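The idea in the commit above — store the text encoder weights in fp16 to save memory, but upcast to fp32 for the actual computation so output quality is unchanged — can be sketched as follows (a minimal NumPy illustration, not the actual ComfyUI code; all names here are hypothetical):

```python
import numpy as np

# Store the weight in fp16: half the memory of an fp32 copy.
rng = np.random.default_rng(0)
w_stored = rng.standard_normal((4, 4)).astype(np.float16)

# At inference time, upcast to fp32 so the matmul itself runs in full
# precision; the only loss is the one-time fp16 rounding of the weights.
x = np.ones((1, 4), dtype=np.float32)
y = x @ w_stored.astype(np.float32)

print(w_stored.nbytes)  # 16 elements * 2 bytes = 32 bytes stored
print(y.dtype)          # float32 result
```

The trade-off: memory for stored weights is halved, while the compute path stays fp32, which is why the commit message can claim quality "stays the exact same" (relative to fp32 weights rounded once to fp16).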
Benjamin Berman
b98e1c0f01
move this back since the issue is resolved
2023-08-22 18:00:18 -07:00
comfyanonymous
afcb9cb1df
All resolutions now work with t2i adapter for SDXL.
2023-08-22 16:23:54 -04:00
Benjamin Berman
cc215a0658
Merge branch 'master' of https://github.com/comfyanonymous/ComfyUI
2023-08-22 12:16:59 -07:00
Benjamin Berman
2ed2f86117
comfy_extras should not be its own toplevel package
2023-08-22 12:15:57 -07:00
comfyanonymous
85fde89d7f
T2I adapter SDXL.
2023-08-22 14:40:43 -04:00
Benjamin Berman
d25d5e75f0
wip make package structure coherent
2023-08-22 11:35:20 -07:00
comfyanonymous
cf5ae46928
Controlnet/t2iadapter cleanup.
2023-08-22 01:06:26 -04:00
comfyanonymous
763b0cf024
Fix control lora not working in fp32.
2023-08-21 20:38:31 -04:00
doombubbles
bc4e52f790
Hash consistency fixes
2023-08-21 13:28:25 -07:00
Benjamin Berman
30ed9fc143
merge with upstream
2023-08-21 12:00:59 -07:00
Benjamin Berman
cff13ace64
node tweaks
2023-08-21 11:44:51 -07:00
comfyanonymous
199d73364a
Fix ControlLora on lowvram.
2023-08-21 00:54:04 -04:00
comfyanonymous
d08e53de2e
Remove autocast from controlnet code.
2023-08-20 21:47:32 -04:00
comfyanonymous
0d7b0a4dc7
Small cleanups.
2023-08-20 14:56:47 -04:00
Simon Lui
9225465975
Further tuning and fix mem_free_total.
2023-08-20 14:19:53 -04:00
Simon Lui
2c096e4260
Add ipex optimize and other enhancements for Intel GPUs based on recent memory changes.
2023-08-20 14:19:51 -04:00
comfyanonymous
e9469e732d
--disable-smart-memory now disables loading model directly to vram.
2023-08-20 04:00:53 -04:00
comfyanonymous
c9b562aed1
Free more memory before VAE encode/decode.
2023-08-19 12:13:13 -04:00
comfyanonymous
b80c3276dc
Fix issue with gligen.
2023-08-18 16:32:23 -04:00
comfyanonymous
d6e4b342e6
Support for Control Loras.
...
Control loras are controlnets where some of the weights are stored in
"lora" format: an up and a down low-rank matrix that, when multiplied
together and added to the unet weight, give the controlnet weight.
This allows a much smaller memory footprint depending on the rank of the
matrices.
These controlnets are used just like regular ones.
2023-08-18 11:59:51 -04:00
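The low-rank reconstruction described in the commit above can be sketched like this (an illustrative NumPy example, not ComfyUI's implementation; shapes and names are hypothetical):

```python
import numpy as np

# Hypothetical shapes: a 320x320 unet weight, rank-16 "lora" factors.
dim, rank = 320, 16
rng = np.random.default_rng(0)
w_unet = rng.standard_normal((dim, dim)).astype(np.float32)
up = rng.standard_normal((dim, rank)).astype(np.float32)    # "up" factor
down = rng.standard_normal((rank, dim)).astype(np.float32)  # "down" factor

# Controlnet weight = unet weight + product of the low-rank factors.
w_controlnet = w_unet + up @ down

# The memory win: the file ships two thin factors instead of a full matrix.
full_params = dim * dim          # 102400
lora_params = dim * rank * 2     # 10240 at rank 16
print(full_params, lora_params)
```

The lower the rank, the smaller the stored factors relative to the full weight, which is the footprint saving the commit describes.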
comfyanonymous
39ac856a33
ReVision support: unclip nodes can now be used with SDXL.
2023-08-18 11:59:36 -04:00
comfyanonymous
76d53c4622
Add support for clip g vision model to CLIPVisionLoader.
2023-08-18 11:13:29 -04:00
Alexopus
e59fe0537a
Fix referenced before assignment
...
For https://github.com/BlenderNeko/ComfyUI_TiledKSampler/issues/13
2023-08-17 22:30:07 +02:00
comfyanonymous
be9c5e25bc
Fix issue with not freeing enough memory when sampling.
2023-08-17 15:59:56 -04:00
comfyanonymous
ac0758a1a4
Fix bug with lowvram and controlnet advanced node.
2023-08-17 13:38:51 -04:00
comfyanonymous
c28db1f315
Fix potential issues with patching models when saving checkpoints.
2023-08-17 11:07:08 -04:00
comfyanonymous
3aee33b54e
Add --disable-smart-memory for those that want the old behaviour.
2023-08-17 03:12:37 -04:00
comfyanonymous
2be2742711
Fix issue with regular torch version.
2023-08-17 01:58:54 -04:00
comfyanonymous
89a0767abf
Smarter memory management.
...
Try to keep models on the vram when possible.
Better lowvram mode for controlnets.
2023-08-17 01:06:34 -04:00
comfyanonymous
2c97c30256
Support small diffusers controlnet so both types are now supported.
2023-08-16 12:45:56 -04:00
comfyanonymous
53f326a3d8
Support diffusers mini controlnets.
2023-08-16 12:28:01 -04:00
comfyanonymous
58f0c616ed
Fix clip vision issue with old transformers versions.
2023-08-16 11:36:22 -04:00
comfyanonymous
ae270f79bc
Fix potential issue with batch size and clip vision.
2023-08-16 11:05:11 -04:00
Benjamin Berman
0fb47ea4f5
Merge branch 'installable' into spellsource
2023-08-15 17:40:48 -07:00
Benjamin Berman
497c01fed1
fix history
2023-08-15 17:36:56 -07:00
Benjamin Berman
625134b6a1
merge with upstream
2023-08-15 17:13:55 -07:00
comfyanonymous
a2ce9655ca
Refactor unclip code.
2023-08-14 23:48:47 -04:00
comfyanonymous
9cc12c833d
CLIPVisionEncode can now encode multiple images.
2023-08-14 16:54:05 -04:00
comfyanonymous
0cb6dac943
Remove 3m from PR #1213 because of some small issues.
2023-08-14 00:48:45 -04:00
comfyanonymous
e244b2df83
Add sgm_uniform scheduler that acts like the default one in sgm.
2023-08-14 00:29:03 -04:00
comfyanonymous
58c7da3665
GPU variant of dpmpp_3m_sde. Note: use 3m with the exponential or karras scheduler.
2023-08-14 00:28:50 -04:00
comfyanonymous
ba319a34e4
Merge branch 'dpmpp3m' of https://github.com/FizzleDorf/ComfyUI
2023-08-14 00:23:15 -04:00
FizzleDorf
3cfad03a68
dpmpp 3m + dpmpp 3m sde added
2023-08-13 22:29:04 -04:00
comfyanonymous
585a062910
Print unet config when model isn't detected.
2023-08-13 01:39:48 -04:00