doctorpangloss
15ff903b35
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-02-09 12:19:00 -08:00
comfyanonymous
25a4805e51
Add a way to set different conditioning for the controlnet.
2024-02-09 14:13:31 -05:00
doctorpangloss
f195230e2a
More tweaks to cli args
2024-02-09 01:40:27 -08:00
doctorpangloss
a3f9d007d4
Fix CLI args issues
2024-02-09 01:20:57 -08:00
doctorpangloss
bdc843ced1
Tweak this message
2024-02-08 22:56:06 -08:00
doctorpangloss
b3f1ce7ef0
Update comment
2024-02-08 21:15:58 -08:00
doctorpangloss
d5bcaa515c
Specify worker,frontend as default roles
2024-02-08 20:37:16 -08:00
doctorpangloss
54d419d855
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-02-08 20:31:05 -08:00
doctorpangloss
80f8c40248
Distributed queueing with amqp-compatible servers like RabbitMQ.
...
- Binary previews are not yet supported
- Use `--distributed-queue-connection-uri=amqp://guest:guest@rabbitmqserver/`
- Roles supported: frontend, worker or both (see `--help`)
- Run `comfy-worker` for a lightweight worker you can wrap your head
around
- Workers and frontends must have the same directory structure (set
with `--cwd`) and supported nodes. Frontends must still have access
to inputs and outputs.
- Configuration notes:
  - `distributed_queue_connection_uri` (Optional[str]): Servers and clients connect to this AMQP URL to form a distributed queue and exchange prompt execution requests and progress updates.
  - `distributed_queue_roles` (List[str]): One or more roles for the distributed queue. Acceptable values are "worker", "frontend", or both (pass the flag twice, once per role). Frontends start the web UI and connect to the AMQP URL to submit prompts; workers pull requests off it.
  - `distributed_queue_name` (str): Name used by frontends and workers to exchange prompt requests and replies. Progress updates are published under the queue name, followed by a '.', then the user ID.
2024-02-08 20:24:27 -08:00
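The progress-update naming rule described in the configuration notes (queue name, then '.', then user ID) can be sketched as a small helper; `progress_routing_key` is an illustrative name, not part of the fork's actual API:

```python
def progress_routing_key(queue_name: str, user_id: str) -> str:
    """Build the AMQP topic for progress updates: "<queue_name>.<user_id>"."""
    return f"{queue_name}.{user_id}"
```

A frontend subscribed to `progress_routing_key("comfyui", "alice")` would then receive only that user's updates.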
doctorpangloss
0673262940
Fix entrypoints, add comfyui-worker entrypoint
2024-02-08 19:08:42 -08:00
doctorpangloss
72e92514a4
Better compatibility with pre-existing prompt_worker method
2024-02-08 18:07:37 -08:00
doctorpangloss
92898b8c9d
Improved support for distributed queues
2024-02-08 14:55:07 -08:00
doctorpangloss
3367362cec
Fix directml again now that I understand what the command line is doing
2024-02-08 10:17:49 -08:00
doctorpangloss
09838ed604
Update readme, remove unused import
2024-02-08 10:09:47 -08:00
doctorpangloss
04ce040d28
Fix commonpath / using arg.cwd on Windows
2024-02-08 09:30:16 -08:00
Benjamin Berman
8508a5a853
Fix args.directml is not None error
2024-02-08 08:40:13 -08:00
Benjamin Berman
b8fc850b47
Correctly preserves your installed torch when installed like pip install --no-build-isolation git+https://github.com/hiddenswitch/ComfyUI.git
2024-02-08 08:36:05 -08:00
blepping
a352c021ec
Allow custom samplers to request discard penultimate sigma
2024-02-08 02:24:23 -07:00
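The "discard penultimate sigma" trick this commit exposes to custom samplers drops the second-to-last value of the sigma schedule before sampling. A minimal list-based sketch (illustrative only; the real code operates on torch tensors):

```python
def discard_penultimate_sigma(sigmas):
    """[s0, ..., s_{n-2}, s_{n-1}, s_n] -> [s0, ..., s_{n-2}, s_n]."""
    if len(sigmas) < 2:
        return list(sigmas)
    # drop the second-to-last sigma, keep the final one (usually 0.0)
    return list(sigmas[:-2]) + [sigmas[-1]]
```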
Benjamin Berman
e45433755e
Include missing seed parameter in sample workflow
2024-02-07 22:18:46 -08:00
doctorpangloss
123c512a84
Fix compatibility with Python 3.9, 3.10, fix Configuration class declaration issue
2024-02-07 21:52:20 -08:00
comfyanonymous
c661a8b118
Don't use numpy for calculating sigmas.
2024-02-07 18:52:51 -05:00
doctorpangloss
25c28867d2
Update script examples
2024-02-07 15:52:26 -08:00
doctorpangloss
d9b4607c36
Add locks to model_management to prevent multiple copies of the models from being loaded at the same time
2024-02-07 15:18:13 -08:00
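The locking this commit describes can be sketched as follows; the names are hypothetical and not the fork's actual model_management API, but the pattern (serialize loads behind one lock so concurrent callers share a single copy) is the one the message states:

```python
import threading

_model_load_lock = threading.Lock()
_loaded_models = {}

def load_model_once(key, loader):
    """Serialize model loading so concurrent callers share one copy."""
    with _model_load_lock:
        if key not in _loaded_models:
            _loaded_models[key] = loader()
        return _loaded_models[key]
```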
doctorpangloss
8e9052c843
Merge with upstream
2024-02-07 14:27:50 -08:00
doctorpangloss
1b2ea61345
Improved API support
...
- Run comfyui workflows directly inside other python applications using
EmbeddedComfyClient.
- Optional telemetry in prompts and models using anonymity preserving
Plausible self-hosted or hosted.
- Better OpenAPI schema
- Basic support for distributed ComfyUI backends. Limitations: no
progress reporting, no easy way to start your own distributed
backend, requires RabbitMQ as a message broker.
2024-02-07 14:20:21 -08:00
comfyanonymous
236bda2683
Make minimum tile size the size of the overlap.
2024-02-05 01:29:26 -05:00
comfyanonymous
66e28ef45c
Don't use is_bf16_supported to check for fp16 support.
2024-02-04 20:53:35 -05:00
comfyanonymous
24129d78e6
Speed up SDXL on 16xx series with fp16 weights and manual cast.
2024-02-04 13:23:43 -05:00
comfyanonymous
4b0239066d
Always use fp16 for the text encoders.
2024-02-02 10:02:49 -05:00
comfyanonymous
da7a8df0d2
Put VAE key name in model config.
2024-01-30 02:24:38 -05:00
doctorpangloss
32d83e52ff
Fix CheckpointLoader even though it is deprecated
2024-01-29 17:20:10 -08:00
doctorpangloss
0d2cc553bc
Revert "Remove unused configs contents"
...
This reverts commit 65549c39f1.
2024-01-29 17:03:27 -08:00
doctorpangloss
2400da51e5
PyInstaller
2024-01-29 17:02:45 -08:00
doctorpangloss
82edb2ff0e
Merge with latest upstream.
2024-01-29 15:06:31 -08:00
doctorpangloss
65549c39f1
Remove unused configs contents
2024-01-29 14:14:46 -08:00
comfyanonymous
89507f8adf
Remove some unused imports.
2024-01-25 23:42:37 -05:00
Dr.Lt.Data
05cd00695a
typo fix - calculate_sigmas_scheduler (#2619)
...
self.scheduler -> scheduler_name
Co-authored-by: Lt.Dr.Data <lt.dr.data@gmail.com>
2024-01-23 03:47:01 -05:00
comfyanonymous
4871a36458
Cleanup some unused imports.
2024-01-21 21:51:22 -05:00
comfyanonymous
78a70fda87
Remove useless import.
2024-01-19 15:38:05 -05:00
comfyanonymous
d76a04b6ea
Add unfinished ImageOnlyCheckpointSave node to save a SVD checkpoint.
...
This node is unfinished, SVD checkpoints saved with this node will
work with ComfyUI but not with anything else.
2024-01-17 19:46:21 -05:00
comfyanonymous
f9e55d8463
Only auto enable bf16 VAE on nvidia GPUs that actually support it.
2024-01-15 03:10:22 -05:00
comfyanonymous
2395ae740a
Make unclip more deterministic.
...
Pass a seed argument; note that this might make old unclip images different.
2024-01-14 17:28:31 -05:00
comfyanonymous
53c8a99e6c
Make server storage the default.
...
Remove --server-storage argument.
2024-01-11 17:21:40 -05:00
comfyanonymous
977eda19a6
Don't round noise mask.
2024-01-11 03:29:58 -05:00
comfyanonymous
10f2609fdd
Add InpaintModelConditioning node.
...
This is an alternative to VAE Encode for inpaint that should work with
lower denoise.
This is a different take on #2501
2024-01-11 03:15:27 -05:00
comfyanonymous
1a57423d30
Fix issue when using multiple t2i adapters with batched images.
2024-01-10 04:00:49 -05:00
comfyanonymous
6a7bc35db8
Use basic attention implementation for small inputs on old pytorch.
2024-01-09 13:46:52 -05:00
pythongosssss
235727fed7
Store user settings/data on the server and multi user support (#2160)
...
* wip per user data
* Rename, hide menu
* better error
rework default user
* store pretty
* Add userdata endpoints
Change nodetemplates to userdata
* add multi user message
* make normal arg
* Fix tests
* Ignore user dir
* user tests
* Changed to default to browser storage and add server-storage arg
* fix crash on empty templates
* fix settings added before load
* ignore parse errors
2024-01-08 17:06:44 -05:00
comfyanonymous
c6951548cf
Update optimized_attention_for_device function for new functions that
...
support masked attention.
2024-01-07 13:52:08 -05:00
comfyanonymous
aaa9017302
Add attention mask support to sub quad attention.
2024-01-07 04:13:58 -05:00
comfyanonymous
0c2c9fbdfa
Support attention mask in split attention.
2024-01-06 13:16:48 -05:00
comfyanonymous
3ad0191bfb
Implement attention mask on xformers.
2024-01-06 04:33:03 -05:00
doctorpangloss
42232f4d20
fix module ref
2024-01-03 20:12:33 -08:00
doctorpangloss
345825dfb5
Fix issues with missing __init__ in upscaler, move web/ directory to comfy/web so that the need for symbolic link support on windows is eliminated
2024-01-03 16:35:00 -08:00
doctorpangloss
d31298ac60
gligen is missing math import
2024-01-03 16:01:44 -08:00
doctorpangloss
369aeb598f
Merge upstream, fix 3.12 compatibility, fix nightlies issue, fix broken node
2024-01-03 16:00:36 -08:00
comfyanonymous
8c6493578b
Implement noise augmentation for SD 4X upscale model.
2024-01-03 14:27:11 -05:00
comfyanonymous
ef4f6037cb
Fix model patches not working in custom sampling scheduler nodes.
2024-01-03 12:16:30 -05:00
comfyanonymous
a7874d1a8b
Add support for the stable diffusion x4 upscaling model.
...
This is an old model.
Load the checkpoint like a regular one and use the new
SD_4XUpscale_Conditioning node.
2024-01-03 03:37:56 -05:00
comfyanonymous
2c4e92a98b
Fix regression.
2024-01-02 14:41:33 -05:00
comfyanonymous
5eddfdd80c
Refactor VAE code.
...
Replace constants with downscale_ratio and latent_channels.
2024-01-02 13:24:34 -05:00
comfyanonymous
a47f609f90
Auto detect out_channels from model.
2024-01-02 01:50:57 -05:00
comfyanonymous
79f73a4b33
Remove useless code.
2024-01-02 01:50:29 -05:00
comfyanonymous
1b103e0cb2
Add argument to run the VAE on the CPU.
2023-12-30 05:49:07 -05:00
comfyanonymous
12e822c6c8
Use function to calculate model size in model patcher.
2023-12-28 21:46:20 -05:00
comfyanonymous
e1e322cf69
Load weights that can't be lowvramed to target device.
2023-12-28 21:41:10 -05:00
comfyanonymous
c782144433
Fix clip vision lowvram mode not working.
2023-12-27 13:50:57 -05:00
comfyanonymous
f21bb41787
Fix taesd VAE in lowvram mode.
2023-12-26 12:52:21 -05:00
comfyanonymous
61b3f15f8f
Fix lowvram mode not working with unCLIP and Revision code.
2023-12-26 05:02:02 -05:00
comfyanonymous
d0165d819a
Fix SVD lowvram mode.
2023-12-24 07:13:18 -05:00
comfyanonymous
a252963f95
--disable-smart-memory now unloads everything like it did originally.
2023-12-23 04:25:06 -05:00
comfyanonymous
36a7953142
Greatly improve lowvram sampling speed by getting rid of accelerate.
...
Let me know if this breaks anything.
2023-12-22 14:38:45 -05:00
comfyanonymous
261bcbb0d9
A few missing comfy ops in the VAE.
2023-12-22 04:05:42 -05:00
comfyanonymous
9a7619b72d
Fix regression with inpaint model.
2023-12-19 02:32:59 -05:00
comfyanonymous
571ea8cdcc
Fix SAG not working with cfg 1.0
2023-12-18 17:03:32 -05:00
comfyanonymous
8cf1daa108
Fix SDXL area composition sometimes not using the right pooled output.
2023-12-18 12:54:23 -05:00
comfyanonymous
2258f85159
Support stable zero 123 model.
...
To use it use the ImageOnlyCheckpointLoader to load the checkpoint and
the new Stable_Zero123 node.
2023-12-18 03:48:04 -05:00
comfyanonymous
2f9d6a97ec
Add --deterministic option to make pytorch use deterministic algorithms.
2023-12-17 16:59:21 -05:00
comfyanonymous
e45d920ae3
Don't resize clip vision image when the size is already good.
2023-12-16 03:06:10 -05:00
comfyanonymous
13e6d5366e
Switch clip vision to manual cast.
...
Make it use the same dtype as the text encoder.
2023-12-16 02:47:26 -05:00
comfyanonymous
719fa0866f
Set clip vision model in eval mode so it works without inference mode.
2023-12-15 18:53:08 -05:00
Hari
574363a8a6
Implement Perp-Neg
2023-12-16 00:28:16 +05:30
comfyanonymous
a5056cfb1f
Remove useless code.
2023-12-15 01:28:16 -05:00
comfyanonymous
329c571993
Improve code legibility.
2023-12-14 11:41:49 -05:00
comfyanonymous
6c5990f7db
Fix cfg being calculated more than once if sampler_cfg_function.
2023-12-13 20:28:04 -05:00
comfyanonymous
ba04a87d10
Refactor and improve the sag node.
...
Moved all the sag related code to comfy_extras/nodes_sag.py
2023-12-13 16:11:26 -05:00
Rafie Walker
6761233e9d
Implement Self-Attention Guidance (#2201)
...
* First SAG test
* need to put extra options on the model instead of patcher
* no errors and results seem not-broken
* Use @ashen-uncensored formula, which works better!!!
* Fix a crash when using weird resolutions. Remove an unnecessary UNet call
* Improve comments, optimize memory in blur routine
* SAG works with sampler_cfg_function
2023-12-13 15:52:11 -05:00
comfyanonymous
b454a67bb9
Support segmind vega model.
2023-12-12 19:09:53 -05:00
comfyanonymous
824e4935f5
Add dtype parameter to VAE object.
2023-12-12 12:03:29 -05:00
comfyanonymous
32b7e7e769
Add manual cast to controlnet.
2023-12-12 11:32:42 -05:00
comfyanonymous
3152023fbc
Use inference dtype for unet memory usage estimation.
2023-12-11 23:50:38 -05:00
comfyanonymous
77755ab8db
Refactor comfy.ops
...
comfy.ops -> comfy.ops.disable_weight_init
This should make it more clear what they actually do.
Some unused code has also been removed.
2023-12-11 23:27:13 -05:00
comfyanonymous
b0aab1e4ea
Add an option --fp16-unet to force using fp16 for the unet.
2023-12-11 18:36:29 -05:00
comfyanonymous
ba07cb748e
Use faster manual cast for fp8 in unet.
2023-12-11 18:24:44 -05:00
comfyanonymous
57926635e8
Switch text encoder to manual cast.
...
Use fp16 text encoder weights for CPU inference to lower memory usage.
2023-12-10 23:00:54 -05:00
comfyanonymous
340177e6e8
Disable non blocking on mps.
2023-12-10 01:30:35 -05:00
comfyanonymous
614b7e731f
Implement GLora.
2023-12-09 18:15:26 -05:00
comfyanonymous
cb63e230b4
Make lora code a bit cleaner.
2023-12-09 14:15:09 -05:00
comfyanonymous
174eba8e95
Use own clip vision model implementation.
2023-12-09 11:56:31 -05:00
comfyanonymous
97015b6b38
Cleanup.
2023-12-08 16:02:08 -05:00
comfyanonymous
a4ec54a40d
Add linear_start and linear_end to model_config.sampling_settings
2023-12-08 02:49:30 -05:00
comfyanonymous
9ac0b487ac
Make --gpu-only put intermediate values in GPU memory instead of cpu.
2023-12-08 02:35:45 -05:00
comfyanonymous
efb704c758
Support attention masking in CLIP implementation.
2023-12-07 02:51:02 -05:00
comfyanonymous
fbdb14d4c4
Cleaner CLIP text encoder implementation.
...
Use a simple CLIP model implementation instead of the one from
transformers.
This will allow some interesting things that would be too hackish to implement
using the transformers implementation.
2023-12-06 23:50:03 -05:00
doctorpangloss
3fd5de9784
fix nodes
2023-12-06 17:30:33 -08:00
comfyanonymous
2db86b4676
Slightly faster lora applying.
2023-12-06 05:13:14 -05:00
comfyanonymous
1bbd65ab30
Missed this one.
2023-12-05 12:48:41 -05:00
comfyanonymous
9b655d4fd7
Fix memory issue with control loras.
2023-12-04 21:55:19 -05:00
comfyanonymous
26b1c0a771
Fix control lora on fp8.
2023-12-04 13:47:41 -05:00
comfyanonymous
be3468ddd5
Less useless downcasting.
2023-12-04 12:53:46 -05:00
comfyanonymous
ca82ade765
Use .itemsize to get dtype size for fp8.
2023-12-04 11:52:06 -05:00
comfyanonymous
31b0f6f3d8
UNET weights can now be stored in fp8.
...
--fp8_e4m3fn-unet and --fp8_e5m2-unet are the two different formats
supported by pytorch.
2023-12-04 11:10:00 -05:00
comfyanonymous
af365e4dd1
All the unet ops with weights are now handled by comfy.ops
2023-12-04 03:12:18 -05:00
Benjamin Berman
01312a55a4
merge upstream
2023-12-03 20:41:13 -08:00
comfyanonymous
61a123a1e0
A different way of handling multiple images passed to SVD.
...
Previously when a list of 3 images [0, 1, 2] was used for a 6 frame video
they were concatenated like this:
[0, 1, 2, 0, 1, 2]
now they are concatenated like this:
[0, 0, 1, 1, 2, 2]
2023-12-03 03:31:47 -05:00
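The two interleavings this commit contrasts can be sketched with plain lists (the real code works on latent tensors, where the new ordering corresponds to repeating each image rather than tiling the whole batch):

```python
def expand_old(images, frames):
    """Tile the whole list: [0, 1, 2] with 6 frames -> [0, 1, 2, 0, 1, 2]."""
    reps = frames // len(images)
    return images * reps

def expand_new(images, frames):
    """Repeat each element in place: [0, 1, 2] with 6 frames -> [0, 0, 1, 1, 2, 2]."""
    reps = frames // len(images)
    return [img for img in images for _ in range(reps)]
```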
comfyanonymous
c97be4db91
Support SD2.1 turbo checkpoint.
2023-11-30 19:27:03 -05:00
comfyanonymous
983ebc5792
Use smart model management for VAE to decrease latency.
2023-11-28 04:58:51 -05:00
comfyanonymous
c45d1b9b67
Add a function to load a unet from a state dict.
2023-11-27 17:41:29 -05:00
comfyanonymous
f30b992b18
.sigma and .timestep now return tensors on the same device as the input.
2023-11-27 16:41:33 -05:00
comfyanonymous
13fdee6abf
Try to free memory for both cond+uncond before inference.
2023-11-27 14:55:40 -05:00
comfyanonymous
be71bb5e13
Tweak memory inference calculations a bit.
2023-11-27 14:04:16 -05:00
comfyanonymous
39e75862b2
Fix regression from last commit.
2023-11-26 03:43:02 -05:00
comfyanonymous
50dc39d6ec
Clean up the extra_options dict for the transformer patches.
...
Now everything in transformer_options gets put in extra_options.
2023-11-26 03:13:56 -05:00
comfyanonymous
5d6dfce548
Fix importing diffusers unets.
2023-11-24 20:35:29 -05:00
comfyanonymous
3e5ea74ad3
Make buggy xformers fall back on pytorch attention.
2023-11-24 03:55:35 -05:00
comfyanonymous
871cc20e13
Support SVD img2vid model.
2023-11-23 19:41:33 -05:00
comfyanonymous
410bf07771
Make VAE memory estimation take dtype into account.
2023-11-22 18:17:19 -05:00
comfyanonymous
32447f0c39
Add sampling_settings so models can specify specific sampling settings.
2023-11-22 17:24:00 -05:00
comfyanonymous
c3ae99a749
Allow controlling downscale and upscale methods in PatchModelAddDownscale.
2023-11-22 03:23:16 -05:00
comfyanonymous
72741105a6
Remove useless code.
2023-11-21 17:27:28 -05:00
comfyanonymous
6a491ebe27
Allow model config to preprocess the vae state dict on load.
2023-11-21 16:29:18 -05:00
comfyanonymous
cd4fc77d5f
Add taesd and taesdxl to VAELoader node.
...
They will show up if both the taesd_encoder and taesd_decoder (or the taesdxl
equivalents) model files are present in the models/vae_approx directory.
2023-11-21 12:54:19 -05:00
comfyanonymous
ce67dcbcda
Make it easy for models to process the unet state dict on load.
2023-11-20 23:17:53 -05:00
comfyanonymous
d9d8702d8d
percent_to_sigma now returns a float instead of a tensor.
2023-11-18 23:20:29 -05:00
comfyanonymous
0cf4e86939
Add some command line arguments to store text encoder weights in fp8.
...
Pytorch supports two variants of fp8:
--fp8_e4m3fn-text-enc (the one that seems to give better results)
--fp8_e5m2-text-enc
2023-11-17 02:56:59 -05:00
comfyanonymous
107e78b1cb
Add support for loading SSD1B diffusers unet version.
...
Improve diffusers model detection.
2023-11-16 23:12:55 -05:00
comfyanonymous
7e3fe3ad28
Make deep shrink behave like it should.
2023-11-16 15:26:28 -05:00
comfyanonymous
9f00a18095
Fix potential issues.
2023-11-16 14:59:54 -05:00
comfyanonymous
7ea6bb038c
Print warning when controlnet can't be applied instead of crashing.
2023-11-16 12:57:12 -05:00
comfyanonymous
dcec1047e6
Invert the start and end percentages in the code.
...
This doesn't affect how percentages behave in the frontend but breaks
things if you relied on them in the backend.
percent_to_sigma goes from 0 to 1.0 instead of 1.0 to 0 for less confusion.
Make percent 0 return an extremely large sigma and percent 1.0 return a
sigma of zero to fix imprecision.
2023-11-16 04:23:44 -05:00
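The endpoint handling this commit describes can be sketched as a wrapper (hypothetical names; the real logic lives in the model sampling classes):

```python
def percent_to_sigma(percent, interior_fn):
    """percent runs 0.0 -> 1.0 over sampling; clamp the endpoints.

    percent 0 stands in for "before any denoising" (effectively infinite
    sigma), percent 1.0 for "fully denoised" (sigma zero), avoiding
    floating-point imprecision at the boundaries.
    """
    if percent <= 0.0:
        return 999999999.9  # extremely large sigma in place of infinity
    if percent >= 1.0:
        return 0.0
    return interior_fn(percent)
```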
comfyanonymous
57eea0efbb
heunpp2 sampler.
2023-11-14 23:50:55 -05:00
comfyanonymous
728613bb3e
Fix last pr.
2023-11-14 14:41:31 -05:00
comfyanonymous
ec3d0ab432
Merge branch 'master' of https://github.com/Jannchie/ComfyUI
2023-11-14 14:38:07 -05:00
comfyanonymous
c962884a5c
Make bislerp work on GPU.
2023-11-14 11:38:36 -05:00
comfyanonymous
420beeeb05
Clean up and refactor sampler code.
...
This should make it much easier to write custom nodes with kdiffusion type
samplers.
2023-11-14 00:39:34 -05:00
Jianqi Pan
f2e49b1d57
fix: adaptation to older versions of pytorch
2023-11-14 14:32:05 +09:00
comfyanonymous
94cc718e9c
Add a way to add patches to the input block.
2023-11-14 00:08:12 -05:00
comfyanonymous
7339479b10
Disable xformers when it can't load properly.
2023-11-13 12:31:10 -05:00
comfyanonymous
4781819a85
Make memory estimation aware of model dtype.
2023-11-12 04:28:26 -05:00
comfyanonymous
dd4ba68b6e
Allow different models to estimate memory usage differently.
2023-11-12 04:03:52 -05:00