Commit Graph

813 Commits

Author SHA1 Message Date
doctorpangloss
955fcb8bb0 Fix API 2024-02-16 08:47:11 -08:00
comfyanonymous
f83109f09b Stable Cascade Stage C. 2024-02-16 10:55:08 -05:00
doctorpangloss
d29028dd3e Fix distributed prompting 2024-02-16 06:17:06 -08:00
comfyanonymous
5e06baf112 Stable Cascade Stage A. 2024-02-16 06:30:39 -05:00
doctorpangloss
0546b01080 Tweak RPC parameters 2024-02-15 23:43:14 -08:00
doctorpangloss
a62f2c00ed Fix RPC issue in frontend 2024-02-15 21:54:13 -08:00
doctorpangloss
cd54bf3b3d Tweak worker RPC creation 2024-02-15 20:28:11 -08:00
doctorpangloss
a5d51c1ae2 Tweak API 2024-02-15 19:08:29 -08:00
comfyanonymous
aeaeca10bd Small refactor of is_device_* functions. 2024-02-15 21:10:10 -05:00
doctorpangloss
06e74226df Add external address parameter 2024-02-15 17:39:15 -08:00
doctorpangloss
7c6b8ecb02 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-02-15 12:56:07 -08:00
comfyanonymous
38b7ac6e26 Don't init the CLIP model when the checkpoint has no CLIP weights. 2024-02-13 00:01:08 -05:00
doctorpangloss
8b4e5cee61 Fix _model_patcher typo 2024-02-12 15:12:49 -08:00
doctorpangloss
b4eda2d5a4 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-02-12 14:24:20 -08:00
comfyanonymous
7dd352cbd7 Merge branch 'feature_expose_discard_penultimate_sigma' of https://github.com/blepping/ComfyUI 2024-02-11 12:23:30 -05:00
Jedrzej Kosinski
f44225fd5f Fix infinite while loop being possible in ddim_scheduler 2024-02-09 17:11:34 -06:00
doctorpangloss
15ff903b35 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-02-09 12:19:00 -08:00
comfyanonymous
25a4805e51 Add a way to set different conditioning for the controlnet. 2024-02-09 14:13:31 -05:00
doctorpangloss
f195230e2a More tweaks to cli args 2024-02-09 01:40:27 -08:00
doctorpangloss
a3f9d007d4 Fix CLI args issues 2024-02-09 01:20:57 -08:00
doctorpangloss
bdc843ced1 Tweak this message 2024-02-08 22:56:06 -08:00
doctorpangloss
b3f1ce7ef0 Update comment 2024-02-08 21:15:58 -08:00
doctorpangloss
d5bcaa515c Specify worker,frontend as default roles 2024-02-08 20:37:16 -08:00
doctorpangloss
54d419d855 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-02-08 20:31:05 -08:00
doctorpangloss
80f8c40248 Distributed queueing with AMQP-compatible servers like RabbitMQ.
- Binary previews are not yet supported
- Use `--distributed-queue-connection-uri=amqp://guest:guest@rabbitmqserver/`
- Roles supported: frontend, worker, or both (see `--help`)
- Run `comfy-worker` for a lightweight worker you can wrap your head around
- Workers and frontends must have the same directory structure (set with `--cwd`) and supported nodes. Frontends must still have access to inputs and outputs.
- Configuration notes:

   distributed_queue_connection_uri (Optional[str]): Servers and clients will connect to this AMQP URL to form a distributed queue and exchange prompt execution requests and progress updates.
   distributed_queue_roles (List[str]): Specifies one or more roles for the distributed queue. Acceptable values are "worker" and "frontend"; pass the flag once per role to take on both. Frontends start the web UI and connect to the provided AMQP URL to submit prompts; workers pull requests off the queue.
   distributed_queue_name (str): This name will be used by the frontends and workers to exchange prompt requests and replies. Progress updates will be prefixed by the queue name, followed by a '.', then the user ID.
2024-02-08 20:24:27 -08:00
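For orientation, the flags described in this commit message might be combined as follows. This is a hedged sketch, not a verified invocation: the `--distributed-queue-roles` flag name and the `comfyui` frontend entrypoint are assumptions inferred from the config keys (the message itself only names `comfy-worker`), and `rabbitmqserver` and `/srv/comfyui` are placeholder values.

```shell
# Frontend: starts the web UI and submits prompts to the AMQP queue.
# Flag and entrypoint names are inferred from the config keys above
# and may differ; check `--help`.
comfyui \
  --cwd=/srv/comfyui \
  --distributed-queue-connection-uri=amqp://guest:guest@rabbitmqserver/ \
  --distributed-queue-roles=frontend

# Worker: pulls prompt requests off the same queue. It must share the
# frontend's directory structure (set with --cwd) and installed nodes.
comfy-worker \
  --cwd=/srv/comfyui \
  --distributed-queue-connection-uri=amqp://guest:guest@rabbitmqserver/
```

Per the notes above, a single process can take on both roles by repeating the role flag once per role.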
doctorpangloss
0673262940 Fix entrypoints, add comfyui-worker entrypoint 2024-02-08 19:08:42 -08:00
doctorpangloss
72e92514a4 Better compatibility with pre-existing prompt_worker method 2024-02-08 18:07:37 -08:00
doctorpangloss
92898b8c9d Improved support for distributed queues 2024-02-08 14:55:07 -08:00
doctorpangloss
3367362cec Fix directml again now that I understand what the command line is doing 2024-02-08 10:17:49 -08:00
doctorpangloss
09838ed604 Update readme, remove unused import 2024-02-08 10:09:47 -08:00
doctorpangloss
04ce040d28 Fix commonpath / using arg.cwd on Windows 2024-02-08 09:30:16 -08:00
Benjamin Berman
8508a5a853 Fix args.directml is not None error 2024-02-08 08:40:13 -08:00
Benjamin Berman
b8fc850b47 Correctly preserves your installed torch when installed like pip install --no-build-isolation git+https://github.com/hiddenswitch/ComfyUI.git 2024-02-08 08:36:05 -08:00
blepping
a352c021ec Allow custom samplers to request discard penultimate sigma 2024-02-08 02:24:23 -07:00
Benjamin Berman
e45433755e Include missing seed parameter in sample workflow 2024-02-07 22:18:46 -08:00
doctorpangloss
123c512a84 Fix compatibility with Python 3.9, 3.10, fix Configuration class declaration issue 2024-02-07 21:52:20 -08:00
comfyanonymous
c661a8b118 Don't use numpy for calculating sigmas. 2024-02-07 18:52:51 -05:00
doctorpangloss
25c28867d2 Update script examples 2024-02-07 15:52:26 -08:00
doctorpangloss
d9b4607c36 Add locks to model_management to prevent multiple copies of the models from being loaded at the same time 2024-02-07 15:18:13 -08:00
doctorpangloss
8e9052c843 Merge with upstream 2024-02-07 14:27:50 -08:00
doctorpangloss
1b2ea61345 Improved API support
- Run ComfyUI workflows directly inside other Python applications using EmbeddedComfyClient.
- Optional, anonymity-preserving telemetry for prompts and models via self-hosted or hosted Plausible.
- Better OpenAPI schema.
- Basic support for distributed ComfyUI backends. Limitations: no progress reporting, no easy way to start your own distributed backend, and RabbitMQ is required as the message broker.
2024-02-07 14:20:21 -08:00
comfyanonymous
236bda2683 Make minimum tile size the size of the overlap. 2024-02-05 01:29:26 -05:00
comfyanonymous
66e28ef45c Don't use is_bf16_supported to check for fp16 support. 2024-02-04 20:53:35 -05:00
comfyanonymous
24129d78e6 Speed up SDXL on 16xx series with fp16 weights and manual cast. 2024-02-04 13:23:43 -05:00
comfyanonymous
4b0239066d Always use fp16 for the text encoders. 2024-02-02 10:02:49 -05:00
comfyanonymous
da7a8df0d2 Put VAE key name in model config. 2024-01-30 02:24:38 -05:00
doctorpangloss
32d83e52ff Fix CheckpointLoader even though it is deprecated 2024-01-29 17:20:10 -08:00
doctorpangloss
0d2cc553bc Revert "Remove unused configs contents"
This reverts commit 65549c39f1.
2024-01-29 17:03:27 -08:00
doctorpangloss
2400da51e5 PyInstaller 2024-01-29 17:02:45 -08:00
doctorpangloss
82edb2ff0e Merge with latest upstream. 2024-01-29 15:06:31 -08:00
doctorpangloss
65549c39f1 Remove unused configs contents 2024-01-29 14:14:46 -08:00
comfyanonymous
89507f8adf Remove some unused imports. 2024-01-25 23:42:37 -05:00
Dr.Lt.Data
05cd00695a typo fix - calculate_sigmas_scheduler (#2619)
self.scheduler -> scheduler_name

Co-authored-by: Lt.Dr.Data <lt.dr.data@gmail.com>
2024-01-23 03:47:01 -05:00
comfyanonymous
4871a36458 Cleanup some unused imports. 2024-01-21 21:51:22 -05:00
comfyanonymous
78a70fda87 Remove useless import. 2024-01-19 15:38:05 -05:00
comfyanonymous
d76a04b6ea Add unfinished ImageOnlyCheckpointSave node to save an SVD checkpoint.
This node is unfinished; SVD checkpoints saved with it will work with ComfyUI but not with anything else.
2024-01-17 19:46:21 -05:00
comfyanonymous
f9e55d8463 Only auto enable bf16 VAE on nvidia GPUs that actually support it. 2024-01-15 03:10:22 -05:00
comfyanonymous
2395ae740a Make unclip more deterministic.
Pass a seed argument; note that this might make old unclip images different.
2024-01-14 17:28:31 -05:00
comfyanonymous
53c8a99e6c Make server storage the default.
Remove --server-storage argument.
2024-01-11 17:21:40 -05:00
comfyanonymous
977eda19a6 Don't round noise mask. 2024-01-11 03:29:58 -05:00
comfyanonymous
10f2609fdd Add InpaintModelConditioning node.
This is an alternative to VAE Encode for inpaint that should work with
lower denoise.

This is a different take on #2501
2024-01-11 03:15:27 -05:00
comfyanonymous
1a57423d30 Fix issue when using multiple t2i adapters with batched images. 2024-01-10 04:00:49 -05:00
comfyanonymous
6a7bc35db8 Use basic attention implementation for small inputs on old pytorch. 2024-01-09 13:46:52 -05:00
pythongosssss
235727fed7 Store user settings/data on the server and multi user support (#2160)
* wip per user data

* Rename, hide menu

* better error
rework default user

* store pretty

* Add userdata endpoints
Change nodetemplates to userdata

* add multi user message

* make normal arg

* Fix tests

* Ignore user dir

* user tests

* Changed to default to browser storage and add server-storage arg

* fix crash on empty templates

* fix settings added before load

* ignore parse errors
2024-01-08 17:06:44 -05:00
comfyanonymous
c6951548cf Update optimized_attention_for_device function for new functions that
support masked attention.
2024-01-07 13:52:08 -05:00
comfyanonymous
aaa9017302 Add attention mask support to sub quad attention. 2024-01-07 04:13:58 -05:00
comfyanonymous
0c2c9fbdfa Support attention mask in split attention. 2024-01-06 13:16:48 -05:00
comfyanonymous
3ad0191bfb Implement attention mask on xformers. 2024-01-06 04:33:03 -05:00
doctorpangloss
42232f4d20 fix module ref 2024-01-03 20:12:33 -08:00
doctorpangloss
345825dfb5 Fix issues with missing __init__ in upscaler; move web/ directory to comfy/web so that the need for symbolic link support on Windows is eliminated 2024-01-03 16:35:00 -08:00
doctorpangloss
d31298ac60 gligen is missing math import 2024-01-03 16:01:44 -08:00
doctorpangloss
369aeb598f Merge upstream, fix 3.12 compatibility, fix nightlies issue, fix broken node 2024-01-03 16:00:36 -08:00
comfyanonymous
8c6493578b Implement noise augmentation for SD 4X upscale model. 2024-01-03 14:27:11 -05:00
comfyanonymous
ef4f6037cb Fix model patches not working in custom sampling scheduler nodes. 2024-01-03 12:16:30 -05:00
comfyanonymous
a7874d1a8b Add support for the stable diffusion x4 upscaling model.
This is an old model.

Load the checkpoint like a regular one and use the new
SD_4XUpscale_Conditioning node.
2024-01-03 03:37:56 -05:00
comfyanonymous
2c4e92a98b Fix regression. 2024-01-02 14:41:33 -05:00
comfyanonymous
5eddfdd80c Refactor VAE code.
Replace constants with downscale_ratio and latent_channels.
2024-01-02 13:24:34 -05:00
comfyanonymous
a47f609f90 Auto detect out_channels from model. 2024-01-02 01:50:57 -05:00
comfyanonymous
79f73a4b33 Remove useless code. 2024-01-02 01:50:29 -05:00
comfyanonymous
1b103e0cb2 Add argument to run the VAE on the CPU. 2023-12-30 05:49:07 -05:00
comfyanonymous
12e822c6c8 Use function to calculate model size in model patcher. 2023-12-28 21:46:20 -05:00
comfyanonymous
e1e322cf69 Load weights that can't be lowvramed to target device. 2023-12-28 21:41:10 -05:00
comfyanonymous
c782144433 Fix clip vision lowvram mode not working. 2023-12-27 13:50:57 -05:00
comfyanonymous
f21bb41787 Fix taesd VAE in lowvram mode. 2023-12-26 12:52:21 -05:00
comfyanonymous
61b3f15f8f Fix lowvram mode not working with unCLIP and Revision code. 2023-12-26 05:02:02 -05:00
comfyanonymous
d0165d819a Fix SVD lowvram mode. 2023-12-24 07:13:18 -05:00
comfyanonymous
a252963f95 --disable-smart-memory now unloads everything like it did originally. 2023-12-23 04:25:06 -05:00
comfyanonymous
36a7953142 Greatly improve lowvram sampling speed by getting rid of accelerate.
Let me know if this breaks anything.
2023-12-22 14:38:45 -05:00
comfyanonymous
261bcbb0d9 A few missing comfy ops in the VAE. 2023-12-22 04:05:42 -05:00
comfyanonymous
9a7619b72d Fix regression with inpaint model. 2023-12-19 02:32:59 -05:00
comfyanonymous
571ea8cdcc Fix SAG not working with cfg 1.0 2023-12-18 17:03:32 -05:00
comfyanonymous
8cf1daa108 Fix SDXL area composition sometimes not using the right pooled output. 2023-12-18 12:54:23 -05:00
comfyanonymous
2258f85159 Support stable zero 123 model.
To use it use the ImageOnlyCheckpointLoader to load the checkpoint and
the new Stable_Zero123 node.
2023-12-18 03:48:04 -05:00
comfyanonymous
2f9d6a97ec Add --deterministic option to make pytorch use deterministic algorithms. 2023-12-17 16:59:21 -05:00
comfyanonymous
e45d920ae3 Don't resize clip vision image when the size is already good. 2023-12-16 03:06:10 -05:00
comfyanonymous
13e6d5366e Switch clip vision to manual cast.
Make it use the same dtype as the text encoder.
2023-12-16 02:47:26 -05:00
comfyanonymous
719fa0866f Set clip vision model in eval mode so it works without inference mode. 2023-12-15 18:53:08 -05:00
Hari
574363a8a6 Implement Perp-Neg 2023-12-16 00:28:16 +05:30
comfyanonymous
a5056cfb1f Remove useless code. 2023-12-15 01:28:16 -05:00
comfyanonymous
329c571993 Improve code legibility. 2023-12-14 11:41:49 -05:00