Accept workflows from the command line

Benjamin Berman 2025-05-07 14:53:39 -07:00
parent da2cbf7c91
commit b6d3f1fb08
35 changed files with 459 additions and 319 deletions

README.md
View File

@ -3,7 +3,6 @@ ComfyUI LTS
A vanilla, up-to-date fork of [ComfyUI](https://github.com/comfyanonymous/comfyui) intended for long term support (LTS) from [AppMana](https://appmana.com) and [Hidden Switch](https://hiddenswitch.com).
### New Features
- To run, just type `comfyui` in your command line and press enter.
@ -25,42 +24,46 @@ ComfyUI lets you design and execute advanced stable diffusion pipelines using a
## Get Started
#### [Desktop Application](https://www.comfy.org/download)
- The easiest way to get started.
- Available on Windows & macOS.
#### [Windows Portable Package](#installing)
- Get the latest commits and completely portable.
- Available on Windows.
#### [Manual Install](#manual-install-windows-linux)
Supports all operating systems and GPU types (NVIDIA, AMD, Intel, Apple Silicon, Ascend).
## [Examples](https://comfyanonymous.github.io/ComfyUI_examples/)
See what ComfyUI can do with the [example workflows](https://comfyanonymous.github.io/ComfyUI_examples/).
## Upstream Features
- Nodes/graph/flowchart interface to experiment and create complex Stable Diffusion workflows without needing to code anything.
- Image Models
- SD1.x, SD2.x,
- [SDXL](https://comfyanonymous.github.io/ComfyUI_examples/sdxl/), [SDXL Turbo](https://comfyanonymous.github.io/ComfyUI_examples/sdturbo/)
- [Stable Cascade](https://comfyanonymous.github.io/ComfyUI_examples/stable_cascade/)
- [SD3 and SD3.5](https://comfyanonymous.github.io/ComfyUI_examples/sd3/)
- Pixart Alpha and Sigma
- [AuraFlow](https://comfyanonymous.github.io/ComfyUI_examples/aura_flow/)
- [HunyuanDiT](https://comfyanonymous.github.io/ComfyUI_examples/hunyuan_dit/)
- [Flux](https://comfyanonymous.github.io/ComfyUI_examples/flux/)
- [Lumina Image 2.0](https://comfyanonymous.github.io/ComfyUI_examples/lumina2/)
- [HiDream](https://comfyanonymous.github.io/ComfyUI_examples/hidream/)
- Video Models
- [Stable Video Diffusion](https://comfyanonymous.github.io/ComfyUI_examples/video/)
- [Mochi](https://comfyanonymous.github.io/ComfyUI_examples/mochi/)
- [LTX-Video](https://comfyanonymous.github.io/ComfyUI_examples/ltxv/)
- [Hunyuan Video](https://comfyanonymous.github.io/ComfyUI_examples/hunyuan_video/)
- [Nvidia Cosmos](https://comfyanonymous.github.io/ComfyUI_examples/cosmos/)
- [Wan 2.1](https://comfyanonymous.github.io/ComfyUI_examples/wan/)
- 3D Models
- [Hunyuan3D 2.0](https://docs.comfy.org/tutorials/3d/hunyuan3D-2)
- [Stable Audio](https://comfyanonymous.github.io/ComfyUI_examples/audio/)
- Asynchronous Queue system
- Many optimizations: Only re-executes the parts of the workflow that change between executions.
@ -327,7 +330,6 @@ For models compatible with Cambricon Extension for PyTorch (`torch_mlu`). Here's
2. Next, install the PyTorch (`torch_mlu`) extension by following the instructions in the [Installation guide](https://www.cambricon.com/docs/sdk_1.15.0/cambricon_pytorch_1.17.0/user_guide_1.9/index.html).
3. Launch ComfyUI by running `python main.py`
## Manual Install (Windows, Linux, macOS) For Development
1. Clone this repo:
@ -434,7 +436,7 @@ Improve the performance of your Mochi model video generation using **Sage Attent
|--------|---------------|---------------|--------------------------|
| A5000 | 7.52s/it | 5.81s/it | 5.00s/it (but corrupted) |
[Use the default Mochi Workflow.](https://github.com/comfyanonymous/ComfyUI_examples/raw/refs/heads/master/mochi/mochi_text_to_video_example.webp) This does not require any custom nodes or any change to your workflow.
Install the dependencies for Windows or Linux using the `withtriton` component, or install the specific dependencies you need from [requirements-triton.txt](./requirements-triton.txt):
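One way to do the latter, as a sketch that assumes you are working from a checkout of this repository and already have `uv` available:

```
uv pip install -r requirements-triton.txt
```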
@ -491,6 +493,7 @@ To use the Cosmos upsampler, install the prerequisites:
uv pip install loguru pynvml
uv pip install --no-deps git+https://github.com/NVIDIA/Cosmos.git
```
Then, use the workflow embedded in the upsampled prompt by dragging and dropping the upsampled animation into your workspace.
The Cosmos upsampler ought to improve any text-to-video generation pipeline. Use the `Video2World` upsampler nodes to download Pixtral-12b and upsample for an image-to-video workflow using NVIDIA's default prompt. Since Pixtral is not fine-tuned, the improvement may not be significant over using another LLM.
@ -539,6 +542,7 @@ some_directory/some_code.py
Then, if your `NODE_CLASS_MAPPINGS` are declared in `__init__.py`, use the following as a `pyproject.toml`, substituting your actual project name:
**pyproject.toml**
```toml
[project]
name = "my_comfyui_nodes"
@ -832,38 +836,38 @@ The default installation includes a fast latent preview method that's low-resolu
## Keyboard Shortcuts
| Keybind | Explanation |
|----------------------------------------|--------------------------------------------------------------------------------------------------------------------|
| `Ctrl` + `Enter` | Queue up current graph for generation |
| `Ctrl` + `Shift` + `Enter` | Queue up current graph as first for generation |
| `Ctrl` + `Alt` + `Enter` | Cancel current generation |
| `Ctrl` + `Z`/`Ctrl` + `Y` | Undo/Redo |
| `Ctrl` + `S` | Save workflow |
| `Ctrl` + `O` | Load workflow |
| `Ctrl` + `A` | Select all nodes |
| `Alt `+ `C` | Collapse/uncollapse selected nodes |
| `Ctrl` + `M` | Mute/unmute selected nodes |
| `Ctrl` + `B` | Bypass selected nodes (acts like the node was removed from the graph and the wires reconnected through) |
| `Delete`/`Backspace` | Delete selected nodes |
| `Ctrl` + `Backspace` | Delete the current graph |
| `Space` | Move the canvas around when held and moving the cursor |
| `Ctrl`/`Shift` + `Click` | Add clicked node to selection |
| `Ctrl` + `C`/`Ctrl` + `V` | Copy and paste selected nodes (without maintaining connections to outputs of unselected nodes) |
| `Ctrl` + `C`/`Ctrl` + `Shift` + `V` | Copy and paste selected nodes (maintaining connections from outputs of unselected nodes to inputs of pasted nodes) |
| `Shift` + `Drag` | Move multiple selected nodes at the same time |
| `Ctrl` + `D` | Load default graph |
| `Alt` + `+` | Canvas Zoom in |
| `Alt` + `-` | Canvas Zoom out |
| `Ctrl` + `Shift` + LMB + Vertical drag | Canvas Zoom in/out |
| `P` | Pin/Unpin selected nodes |
| `Ctrl` + `G` | Group selected nodes |
| `Q` | Toggle visibility of the queue |
| `H` | Toggle visibility of history |
| `R` | Refresh graph |
| `F` | Show/Hide menu |
| `.` | Fit view to selection (Whole graph when nothing is selected) |
| Double-Click LMB | Open node quick search palette |
| `Shift` + Drag | Move multiple wires at once |
| `Ctrl` + `Alt` + LMB | Disconnect all wires from clicked slot |
`Ctrl` can be replaced with `Cmd` for macOS users.
@ -887,31 +891,26 @@ You can pass additional extra model path configurations with one or more copies
### Command Line Arguments
```
usage: comfyui.exe [-h] [-c CONFIG_FILE] [--write-out-config-file CONFIG_OUTPUT_PATH] [-w CWD] [--base-paths BASE_PATHS [BASE_PATHS ...]] [-H [IP]] [--port PORT]
[--enable-cors-header [ORIGIN]] [--max-upload-size MAX_UPLOAD_SIZE] [--extra-model-paths-config PATH [PATH ...]]
[--output-directory OUTPUT_DIRECTORY] [--temp-directory TEMP_DIRECTORY] [--input-directory INPUT_DIRECTORY] [--auto-launch] [--disable-auto-launch]
[--cuda-device DEVICE_ID] [--cuda-malloc | --disable-cuda-malloc] [--force-fp32 | --force-fp16 | --force-bf16]
[--bf16-unet | --fp16-unet | --fp8_e4m3fn-unet | --fp8_e5m2-unet] [--fp16-vae | --fp32-vae | --bf16-vae] [--cpu-vae]
[--fp8_e4m3fn-text-enc | --fp8_e5m2-text-enc | --fp16-text-enc | --fp32-text-enc] [--directml [DIRECTML_DEVICE]] [--disable-ipex-optimize]
[--preview-method [none,auto,latent2rgb,taesd]] [--preview-size PREVIEW_SIZE] [--cache-lru CACHE_LRU]
[--use-split-cross-attention | --use-quad-cross-attention | --use-pytorch-cross-attention] [--disable-xformers] [--disable-flash-attn]
[--disable-sage-attention] [--force-upcast-attention | --dont-upcast-attention]
[--gpu-only | --highvram | --normalvram | --lowvram | --novram | --cpu] [--reserve-vram RESERVE_VRAM]
[--default-hashing-function {md5,sha1,sha256,sha512}] [--disable-smart-memory] [--deterministic] [--fast] [--dont-print-server]
[--quick-test-for-ci] [--windows-standalone-build] [--disable-metadata] [--disable-all-custom-nodes] [--multi-user] [--create-directories]
[--plausible-analytics-base-url PLAUSIBLE_ANALYTICS_BASE_URL] [--plausible-analytics-domain PLAUSIBLE_ANALYTICS_DOMAIN]
[--analytics-use-identity-provider] [--distributed-queue-connection-uri DISTRIBUTED_QUEUE_CONNECTION_URI] [--distributed-queue-worker]
[--distributed-queue-frontend] [--distributed-queue-name DISTRIBUTED_QUEUE_NAME] [--external-address EXTERNAL_ADDRESS]
[--logging-level {DEBUG,INFO,WARNING,ERROR,CRITICAL}] [--disable-known-models] [--max-queue-size MAX_QUEUE_SIZE]
[--otel-service-name OTEL_SERVICE_NAME] [--otel-service-version OTEL_SERVICE_VERSION] [--otel-exporter-otlp-endpoint OTEL_EXPORTER_OTLP_ENDPOINT]
[--force-channels-last] [--force-hf-local-dir-mode] [--front-end-version FRONT_END_VERSION] [--front-end-root FRONT_END_ROOT]
[--executor-factory EXECUTOR_FACTORY] [--openai-api-key OPENAI_API_KEY] [--user-directory USER_DIRECTORY] [--blip-model-url BLIP_MODEL_URL]
[--blip-model-vqa-url BLIP_MODEL_VQA_URL] [--sam-model-vith-url SAM_MODEL_VITH_URL] [--sam-model-vitl-url SAM_MODEL_VITL_URL]
[--sam-model-vitb-url SAM_MODEL_VITB_URL] [--history-display-limit HISTORY_DISPLAY_LIMIT] [--ffmpeg-bin-path FFMPEG_BIN_PATH]
[--ffmpeg-extra-codecs FFMPEG_EXTRA_CODECS] [--wildcards-path WILDCARDS_PATH] [--wildcard-api WILDCARD_API] [--photoprism-host PHOTOPRISM_HOST]
[--immich-host IMMICH_HOST] [--ideogram-session-cookie IDEOGRAM_SESSION_COOKIE] [--annotator-ckpts-path ANNOTATOR_CKPTS_PATH] [--use-symlinks]
[--ort-providers ORT_PROVIDERS] [--vfi-ops-backend VFI_OPS_BACKEND] [--dependency-version DEPENDENCY_VERSION] [--mmdet-skip] [--sam-editor-cpu]
[--sam-editor-model SAM_EDITOR_MODEL] [--custom-wildcards CUSTOM_WILDCARDS] [--disable-gpu-opencv]
usage: comfyui [-h] [-c CONFIG_FILE] [--write-out-config-file CONFIG_OUTPUT_PATH] [-w CWD] [--base-paths BASE_PATHS [BASE_PATHS ...]] [-H [IP]] [--port PORT]
[--enable-cors-header [ORIGIN]] [--max-upload-size MAX_UPLOAD_SIZE] [--base-directory BASE_DIRECTORY] [--extra-model-paths-config PATH [PATH ...]]
[--output-directory OUTPUT_DIRECTORY] [--temp-directory TEMP_DIRECTORY] [--input-directory INPUT_DIRECTORY] [--auto-launch] [--disable-auto-launch]
[--cuda-device DEVICE_ID] [--cuda-malloc | --disable-cuda-malloc] [--force-fp32 | --force-fp16 | --force-bf16]
[--fp32-unet | --fp64-unet | --bf16-unet | --fp16-unet | --fp8_e4m3fn-unet | --fp8_e5m2-unet] [--fp16-vae | --fp32-vae | --bf16-vae] [--cpu-vae]
[--fp8_e4m3fn-text-enc | --fp8_e5m2-text-enc | --fp16-text-enc | --fp32-text-enc | --bf16-text-enc] [--directml [DIRECTML_DEVICE]]
[--oneapi-device-selector SELECTOR_STRING] [--disable-ipex-optimize] [--preview-method [none,auto,latent2rgb,taesd]] [--preview-size PREVIEW_SIZE]
[--cache-classic | --cache-lru CACHE_LRU | --cache-none]
[--use-split-cross-attention | --use-quad-cross-attention | --use-pytorch-cross-attention | --use-sage-attention | --use-flash-attention] [--disable-xformers]
[--force-upcast-attention | --dont-upcast-attention] [--gpu-only | --highvram | --normalvram | --lowvram | --novram | --cpu] [--reserve-vram RESERVE_VRAM]
[--default-hashing-function {md5,sha1,sha256,sha512}] [--disable-smart-memory] [--deterministic] [--fast [FAST ...]] [--dont-print-server] [--quick-test-for-ci]
[--windows-standalone-build] [--disable-metadata] [--disable-all-custom-nodes] [--multi-user] [--create-directories] [--log-stdout]
[--plausible-analytics-base-url PLAUSIBLE_ANALYTICS_BASE_URL] [--plausible-analytics-domain PLAUSIBLE_ANALYTICS_DOMAIN] [--analytics-use-identity-provider]
[--distributed-queue-connection-uri DISTRIBUTED_QUEUE_CONNECTION_URI] [--distributed-queue-worker] [--distributed-queue-frontend]
[--distributed-queue-name DISTRIBUTED_QUEUE_NAME] [--external-address EXTERNAL_ADDRESS] [--logging-level {DEBUG,INFO,WARNING,ERROR,CRITICAL}]
[--disable-known-models] [--max-queue-size MAX_QUEUE_SIZE] [--otel-service-name OTEL_SERVICE_NAME] [--otel-service-version OTEL_SERVICE_VERSION]
[--otel-exporter-otlp-endpoint OTEL_EXPORTER_OTLP_ENDPOINT] [--force-channels-last] [--force-hf-local-dir-mode] [--front-end-version FRONT_END_VERSION]
[--panic-when PANIC_WHEN] [--front-end-root FRONT_END_ROOT] [--executor-factory EXECUTOR_FACTORY] [--openai-api-key OPENAI_API_KEY]
[--ideogram-api-key IDEOGRAM_API_KEY] [--anthropic-api-key ANTHROPIC_API_KEY] [--user-directory USER_DIRECTORY] [--enable-compress-response-body]
[--workflows WORKFLOWS [WORKFLOWS ...]]
options:
-h, --help show this help message and exit
@ -919,27 +918,28 @@ options:
config file path
--write-out-config-file CONFIG_OUTPUT_PATH
takes the current command line args and writes them out to a config file at the given path, then exits
-w CWD, --cwd CWD Specify the working directory. If not set, this is the current working directory. models/, input/, output/ and other directories will be located here by
default. [env var: COMFYUI_CWD]
--base-paths BASE_PATHS [BASE_PATHS ...]
Additional base paths for custom nodes, models and inputs. [env var: COMFYUI_BASE_PATHS]
-H [IP], --listen [IP]
Specify the IP address to listen on (default: 127.0.0.1). You can give a list of ip addresses by separating them with a comma like: 127.2.2.2,127.3.3.3
If --listen is provided without an argument, it defaults to 0.0.0.0,:: (listens on all ipv4 and ipv6) [env var: COMFYUI_LISTEN]
--port PORT Set the listen port. [env var: COMFYUI_PORT]
--enable-cors-header [ORIGIN]
Enable CORS (Cross-Origin Resource Sharing) with optional origin or allow all with default '*'. [env var: COMFYUI_ENABLE_CORS_HEADER]
--max-upload-size MAX_UPLOAD_SIZE
Set the maximum upload size in MB. [env var: COMFYUI_MAX_UPLOAD_SIZE]
--base-directory BASE_DIRECTORY
Set the ComfyUI base directory for models, custom_nodes, input, output, temp, and user directories. [env var: COMFYUI_BASE_DIRECTORY]
--extra-model-paths-config PATH [PATH ...]
Load one or more extra_model_paths.yaml files. [env var: COMFYUI_EXTRA_MODEL_PATHS_CONFIG]
--output-directory OUTPUT_DIRECTORY
Set the ComfyUI output directory. [env var: COMFYUI_OUTPUT_DIRECTORY]
Set the ComfyUI output directory. Overrides --base-directory. [env var: COMFYUI_OUTPUT_DIRECTORY]
--temp-directory TEMP_DIRECTORY
Set the ComfyUI temp directory (default is in the ComfyUI directory). [env var: COMFYUI_TEMP_DIRECTORY]
Set the ComfyUI temp directory (default is in the ComfyUI directory). Overrides --base-directory. [env var: COMFYUI_TEMP_DIRECTORY]
--input-directory INPUT_DIRECTORY
Set the ComfyUI input directory. [env var: COMFYUI_INPUT_DIRECTORY]
Set the ComfyUI input directory. Overrides --base-directory. [env var: COMFYUI_INPUT_DIRECTORY]
--auto-launch Automatically launch ComfyUI in the default browser. [env var: COMFYUI_AUTO_LAUNCH]
--disable-auto-launch
Disable auto launching the browser. [env var: COMFYUI_DISABLE_AUTO_LAUNCH]
@ -951,8 +951,10 @@ options:
--force-fp32 Force fp32 (If this makes your GPU work better please report it). [env var: COMFYUI_FORCE_FP32]
--force-fp16 Force fp16. [env var: COMFYUI_FORCE_FP16]
--force-bf16 Force bf16. [env var: COMFYUI_FORCE_BF16]
--bf16-unet Run the UNET in bf16. This should only be used for testing stuff. [env var: COMFYUI_BF16_UNET]
--fp16-unet Store unet weights in fp16. [env var: COMFYUI_FP16_UNET]
--fp32-unet Run the diffusion model in fp32. [env var: COMFYUI_FP32_UNET]
--fp64-unet Run the diffusion model in fp64. [env var: COMFYUI_FP64_UNET]
--bf16-unet Run the diffusion model in bf16. [env var: COMFYUI_BF16_UNET]
--fp16-unet Run the diffusion model in fp16 [env var: COMFYUI_FP16_UNET]
--fp8_e4m3fn-unet Store unet weights in fp8_e4m3fn. [env var: COMFYUI_FP8_E4M3FN_UNET]
--fp8_e5m2-unet Store unet weights in fp8_e5m2. [env var: COMFYUI_FP8_E5M2_UNET]
--fp16-vae Run the VAE in fp16, might cause black images. [env var: COMFYUI_FP16_VAE]
@ -964,26 +966,31 @@ options:
--fp8_e5m2-text-enc Store text encoder weights in fp8 (e5m2 variant). [env var: COMFYUI_FP8_E5M2_TEXT_ENC]
--fp16-text-enc Store text encoder weights in fp16. [env var: COMFYUI_FP16_TEXT_ENC]
--fp32-text-enc Store text encoder weights in fp32. [env var: COMFYUI_FP32_TEXT_ENC]
--bf16-text-enc Store text encoder weights in bf16. [env var: COMFYUI_BF16_TEXT_ENC]
--directml [DIRECTML_DEVICE]
Use torch-directml. [env var: COMFYUI_DIRECTML]
--oneapi-device-selector SELECTOR_STRING
Sets the oneAPI device(s) this instance will use. [env var: COMFYUI_ONEAPI_DEVICE_SELECTOR]
--disable-ipex-optimize
Disables ipex.optimize when loading models with Intel GPUs. [env var: COMFYUI_DISABLE_IPEX_OPTIMIZE]
Disables ipex.optimize default when loading models with Intel's Extension for Pytorch. [env var: COMFYUI_DISABLE_IPEX_OPTIMIZE]
--preview-method [none,auto,latent2rgb,taesd]
Default preview method for sampler nodes. [env var: COMFYUI_PREVIEW_METHOD]
--preview-size PREVIEW_SIZE
Sets the maximum preview size for sampler nodes. [env var: COMFYUI_PREVIEW_SIZE]
--cache-classic WARNING: Unused. Use the old style (aggressive) caching. [env var: COMFYUI_CACHE_CLASSIC]
--cache-lru CACHE_LRU
Use LRU caching with a maximum of N node results cached. May use more RAM/VRAM. [env var: COMFYUI_CACHE_LRU]
--cache-none Reduced RAM/VRAM usage at the expense of executing every node for each run. [env var: COMFYUI_CACHE_NONE]
--use-split-cross-attention
Use the split cross attention optimization. Ignored when xformers is used. [env var: COMFYUI_USE_SPLIT_CROSS_ATTENTION]
--use-quad-cross-attention
Use the sub-quadratic cross attention optimization. Ignored when xformers is used. [env var: COMFYUI_USE_QUAD_CROSS_ATTENTION]
--use-pytorch-cross-attention
Use the new pytorch 2.0 cross attention function. [env var: COMFYUI_USE_PYTORCH_CROSS_ATTENTION]
Use the new pytorch 2.0 cross attention function (default). [env var: COMFYUI_USE_PYTORCH_CROSS_ATTENTION]
--use-sage-attention Use sage attention. [env var: COMFYUI_USE_SAGE_ATTENTION]
--use-flash-attention
Use FlashAttention. [env var: COMFYUI_USE_FLASH_ATTENTION]
--disable-xformers Disable xformers. [env var: COMFYUI_DISABLE_XFORMERS]
--disable-flash-attn Disable Flash Attention [env var: COMFYUI_DISABLE_FLASH_ATTN]
--disable-sage-attention
Disable Sage Attention [env var: COMFYUI_DISABLE_SAGE_ATTENTION]
--force-upcast-attention
Force enable attention upcasting, please report if it fixes black images. [env var: COMFYUI_FORCE_UPCAST_ATTENTION]
--dont-upcast-attention
@ -995,8 +1002,8 @@ options:
--novram When lowvram isn't enough. [env var: COMFYUI_NOVRAM]
--cpu To use the CPU for everything (slow). [env var: COMFYUI_CPU]
--reserve-vram RESERVE_VRAM
Set the amount of vram in GB you want to reserve for use by your OS/other software. By default some amount is reserved depending on your OS. [env var:
COMFYUI_RESERVE_VRAM]
--default-hashing-function {md5,sha1,sha256,sha512}
Allows you to choose the hash function to use for duplicate filename / contents comparison. Default is sha256. [env var:
COMFYUI_DEFAULT_HASHING_FUNCTION]
@ -1004,17 +1011,19 @@ options:
Force ComfyUI to aggressively offload to regular ram instead of keeping models in vram when it can. [env var: COMFYUI_DISABLE_SMART_MEMORY]
--deterministic Make pytorch use slower deterministic algorithms when it can. Note that this might not make images deterministic in all cases. [env var:
COMFYUI_DETERMINISTIC]
--fast Enable some untested and potentially quality deteriorating optimizations. [env var: COMFYUI_FAST]
--fast [FAST ...] Enable some untested and potentially quality deteriorating optimizations. Pass a list of specific optimizations if you only want to enable specific ones.
Current valid optimizations: fp16_accumulation fp8_matrix_mult cublas_ops [env var: COMFYUI_FAST]
--dont-print-server Don't print server output. [env var: COMFYUI_DONT_PRINT_SERVER]
--quick-test-for-ci Quick test for CI. Raises an error if nodes cannot be imported. [env var: COMFYUI_QUICK_TEST_FOR_CI]
--windows-standalone-build
Windows standalone build: Enable convenient things that most people using the standalone windows build will probably enjoy (like auto opening the page
on startup). [env var: COMFYUI_WINDOWS_STANDALONE_BUILD]
--disable-metadata Disable saving prompt metadata in files. [env var: COMFYUI_DISABLE_METADATA]
--disable-all-custom-nodes
Disable loading all custom nodes. [env var: COMFYUI_DISABLE_ALL_CUSTOM_NODES]
--multi-user Enables per-user storage. [env var: COMFYUI_MULTI_USER]
--create-directories Creates the default models/, input/, output/ and temp/ directories, then exits. [env var: COMFYUI_CREATE_DIRECTORIES]
--log-stdout Send normal process output to stdout instead of stderr (default). [env var: COMFYUI_LOG_STDOUT]
--plausible-analytics-base-url PLAUSIBLE_ANALYTICS_BASE_URL
Enables server-side analytics events sent to the provided URL. [env var: COMFYUI_PLAUSIBLE_ANALYTICS_BASE_URL]
--plausible-analytics-domain PLAUSIBLE_ANALYTICS_DOMAIN
@ -1022,15 +1031,15 @@ options:
--analytics-use-identity-provider
Uses platform identifiers for unique visitor analytics. [env var: COMFYUI_ANALYTICS_USE_IDENTITY_PROVIDER]
--distributed-queue-connection-uri DISTRIBUTED_QUEUE_CONNECTION_URI
EXAMPLE: "amqp://guest:guest@127.0.0.1" - Servers and clients will connect to this AMPQ URL to form a distributed queue and exchange prompt
execution requests and progress updates. [env var: COMFYUI_DISTRIBUTED_QUEUE_CONNECTION_URI]
EXAMPLE: "amqp://guest:guest@127.0.0.1" - Servers and clients will connect to this AMPQ URL to form a distributed queue and exchange prompt execution
requests and progress updates. [env var: COMFYUI_DISTRIBUTED_QUEUE_CONNECTION_URI]
--distributed-queue-worker
Workers will pull requests off the AMQP URL. [env var: COMFYUI_DISTRIBUTED_QUEUE_WORKER]
--distributed-queue-frontend
Frontends will start the web UI and connect to the provided AMQP URL to submit prompts. [env var: COMFYUI_DISTRIBUTED_QUEUE_FRONTEND]
--distributed-queue-name DISTRIBUTED_QUEUE_NAME
This name will be used by the frontends and workers to exchange prompt requests and replies. Progress updates will be prefixed by the queue name,
followed by a '.', then the user ID [env var: COMFYUI_DISTRIBUTED_QUEUE_NAME]
--external-address EXTERNAL_ADDRESS
Specifies a base URL for external addresses reported by the API, such as for image paths. [env var: COMFYUI_EXTERNAL_ADDRESS]
--logging-level {DEBUG,INFO,WARNING,ERROR,CRITICAL}
@ -1044,28 +1053,41 @@ options:
--otel-service-version OTEL_SERVICE_VERSION
The version of the service or application that is generating telemetry data. [env var: OTEL_SERVICE_VERSION]
--otel-exporter-otlp-endpoint OTEL_EXPORTER_OTLP_ENDPOINT
A base endpoint URL for any signal type, with an optionally-specified port number. Helpful for when you're sending more than one signal to the same
endpoint and want one environment variable to control the endpoint. [env var: OTEL_EXPORTER_OTLP_ENDPOINT]
--force-channels-last
Force channels last format when inferencing the models. [env var: COMFYUI_FORCE_CHANNELS_LAST]
--force-hf-local-dir-mode
Download repos from huggingface.co to the models/huggingface directory with the "local_dir" argument instead of models/huggingface_cache with the
"cache_dir" argument, recreating the traditional file structure. [env var: COMFYUI_FORCE_HF_LOCAL_DIR_MODE]
--front-end-version FRONT_END_VERSION
Specifies the version of the frontend to be used. This command needs internet connectivity to query and download available frontend implementations from
GitHub releases. The version string should be in the format of: [repoOwner]/[repoName]@[version] where version is one of: "latest" or a valid version
number (e.g. "1.0.0") [env var: COMFYUI_FRONT_END_VERSION]
--panic-when PANIC_WHEN
List of fully qualified exception class names to panic (sys.exit(1)) when a workflow raises it. Example: --panic-when=torch.cuda.OutOfMemoryError. Can
be specified multiple times or as a comma-separated list. [env var: COMFYUI_PANIC_WHEN]
--front-end-root FRONT_END_ROOT
The local filesystem path to the directory where the frontend is located. Overrides --front-end-version. [env var: COMFYUI_FRONT_END_ROOT]
--executor-factory EXECUTOR_FACTORY
When running ComfyUI as a distributed worker, this specifies the kind of executor that should be used to run the actual ComfyUI workflow worker. A
ThreadPoolExecutor is the default. A ProcessPoolExecutor results in better memory management, since the process will be closed and large, contiguous
blocks of CUDA memory can be freed. [env var: COMFYUI_EXECUTOR_FACTORY]
--openai-api-key OPENAI_API_KEY
Configures the OpenAI API Key for the OpenAI nodes [env var: OPENAI_API_KEY]
Configures the OpenAI API Key for the OpenAI nodes. Visit https://platform.openai.com/api-keys to create this key. [env var: OPENAI_API_KEY]
--ideogram-api-key IDEOGRAM_API_KEY
Configures the Ideogram API Key for the Ideogram nodes. Visit https://ideogram.ai/manage-api to create this key. [env var: IDEOGRAM_API_KEY]
--anthropic-api-key ANTHROPIC_API_KEY
Configures the Anthropic API key for its nodes related to Claude functionality. Visit https://console.anthropic.com/settings/keys to create this key.
[env var: ANTHROPIC_API_KEY]
--user-directory USER_DIRECTORY
Set the ComfyUI user directory with an absolute path. [env var: COMFYUI_USER_DIRECTORY]
Set the ComfyUI user directory with an absolute path. Overrides --base-directory. [env var: COMFYUI_USER_DIRECTORY]
--enable-compress-response-body
Enable compressing response body. [env var: COMFYUI_ENABLE_COMPRESS_RESPONSE_BODY]
--workflows WORKFLOWS [WORKFLOWS ...]
Execute the API workflow(s) specified in the provided files. For each workflow, its outputs will be printed as a line to standard out. Application
logging will be redirected to standard error. Use `-` to signify standard in. [env var: COMFYUI_WORKFLOWS]
Args that start with '--' can also be set in a config file (config.yaml or config.json or specified via -c). Config file syntax allows: key=value, flag=true, stuff=[a,b,c] (for details, see syntax at
https://goo.gl/R74nmi). In general, command-line values override environment variables which override config file values which override defaults.
```
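For example, the new `--workflows` flag executes API-format workflows without opening the UI. A minimal sketch, assuming `workflow_api.json` is a workflow you exported with **Save (API Format)**:

```
# execute a workflow file; its outputs are printed as a line of JSON to standard out
comfyui --workflows workflow_api.json

# read a workflow from standard in
cat workflow_api.json | comfyui --workflows -
```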
@ -1079,9 +1101,9 @@ There are multiple ways to use this ComfyUI package to run workflows programmati
Start ComfyUI by creating an ordinary Python object. This does not create a web server. It runs ComfyUI as a library, like any other package you are familiar with:
```python
from comfy.client.embedded_comfy_client import EmbeddedComfyClient
from comfy.client.embedded_comfy_client import Comfy
async with EmbeddedComfyClient() as client:
async with Comfy() as client:
# This will run your prompt
# To get the prompt JSON, visit the ComfyUI interface, design your workflow and click **Save (API Format)**. This JSON is what you will use as your workflow.
outputs = await client.queue_prompt(prompt)
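```

Since `queue_prompt_api` now also accepts a raw JSON string or a plain dict (validated into a `Prompt` internally), a fuller sketch might look like the following; the workflow path is hypothetical:

```python
import asyncio
import json

from comfy.client.embedded_comfy_client import Comfy


async def run():
    # hypothetical path to a workflow exported with Save (API Format)
    with open("workflow_api.json") as f:
        prompt_json = f.read()

    async with Comfy() as client:
        # strings and plain dicts are validated into a Prompt before execution
        response = await client.queue_prompt_api(prompt_json)
        # print the node outputs as JSON, mirroring what --workflows prints per workflow
        print(json.dumps(response.outputs))


asyncio.run(run())
```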

View File

@ -74,7 +74,10 @@ def setup_logger(log_level: str = 'INFO', capacity: int = 300, use_stdout: bool
logger.setLevel(log_level)
stream_handler = logging.StreamHandler()
stream_handler.setFormatter(logging.Formatter("%(message)s"))
stream_handler.setFormatter(logging.Formatter(
"%(asctime)s [%(name)s] [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s",
datefmt="%Y-%m-%d %H:%M:%S"
))
if use_stdout:
# Only errors and critical to stderr

View File

@ -1,18 +1,16 @@
from __future__ import annotations
import logging
import os
import sys
from importlib.metadata import entry_points
from types import ModuleType
from typing import Optional, List
from typing import Optional
import configargparse as argparse
from watchdog.observers import Observer
from . import __version__
from . import options
from .cli_args_types import LatentPreviewMethod, Configuration, ConfigurationExtender, ConfigChangeHandler, EnumAction, \
from .cli_args_types import LatentPreviewMethod, Configuration, ConfigurationExtender, EnumAction, \
EnhancedConfigArgParser, PerformanceFeature, is_valid_directory
# todo: move this
@ -104,7 +102,7 @@ def _create_parser() -> EnhancedConfigArgParser:
attn_group.add_argument("--use-quad-cross-attention", action="store_true",
help="Use the sub-quadratic cross attention optimization . Ignored when xformers is used.")
attn_group.add_argument("--use-pytorch-cross-attention", action="store_true",
help="Use the new pytorch 2.0 cross attention function.")
help="Use the new pytorch 2.0 cross attention function (default).", default=True)
attn_group.add_argument("--use-sage-attention", action="store_true", help="Use sage attention.")
attn_group.add_argument("--use-flash-attention", action="store_true", help="Use FlashAttention.")
@ -250,6 +248,8 @@ def _create_parser() -> EnhancedConfigArgParser:
parser.add_argument("--enable-compress-response-body", action="store_true", help="Enable compressing response body.")
parser.add_argument("--workflows", type=str, nargs='+', default=[], help="Execute the API workflow(s) specified in the provided files. For each workflow, its outputs will be printed to a line to standard out. Application logging will be redirected to standard error. Use `-` to signify standard in.")
# now give plugins a chance to add configuration
for entry_point in entry_points().select(group='comfyui.custom_config'):
try:
@ -288,35 +288,11 @@ def _parse_args(parser: Optional[argparse.ArgumentParser] = None, args_parsing:
configuration_obj = Configuration(**vars(args))
configuration_obj.config_files = config_files
assert all(isinstance(config_file, str) for config_file in config_files)
# we always have to set up a watcher, even when there are no existing files
if len(config_files) > 0:
_setup_config_file_watcher(configuration_obj, parser, config_files)
return configuration_obj
def _setup_config_file_watcher(config: Configuration, parser: EnhancedConfigArgParser, config_files: List[str]):
def update_config():
new_args, _, _ = parser.parse_known_args()
new_config = vars(new_args)
config.update(new_config)
handler = ConfigChangeHandler(config_files, update_config)
observer = Observer()
for config_file in config_files:
config_dir = os.path.dirname(config_file) or '.'
observer.schedule(handler, path=config_dir, recursive=False)
observer.start()
# Ensure the observer is stopped when the program exits
import atexit
atexit.register(observer.stop)
atexit.register(observer.join)
def default_configuration() -> Configuration:
return _parse_args(_create_parser())
args = _parse_args(args_parsing=options.args_parsing)

View File

@ -6,7 +6,6 @@ from typing import Optional, List, Callable, Any, Union, Mapping, NamedTuple
import configargparse
import configargparse as argparse
from watchdog.events import FileSystemEventHandler
ConfigurationExtender = Callable[[argparse.ArgParser], Optional[argparse.ArgParser]]
@ -18,16 +17,6 @@ class LatentPreviewMethod(enum.Enum):
TAESD = "taesd"
class ConfigChangeHandler(FileSystemEventHandler):
def __init__(self, config_file_paths: List[str], update_callback: Callable[[], None]):
self.config_file_paths = config_file_paths
self.update_callback = update_callback
def on_modified(self, event):
if not event.is_directory and event.src_path in self.config_file_paths:
self.update_callback()
ConfigObserver = Callable[[str, Any], None]
@ -142,6 +131,7 @@ class Configuration(dict):
log_stdout (bool): Send normal process output to stdout instead of stderr (default)
panic_when (list[str]): List of fully qualified exception class names to panic (sys.exit(1)) when a workflow raises it.
enable_compress_response_body (bool): Enable compressing response body.
workflows (list[str]): Execute the API workflow(s) specified in the provided files. For each workflow, its outputs will be printed as a line to standard out. Application logging will be redirected to standard error. Use `-` to signify standard in.
"""
def __init__(self, **kwargs):
@ -235,15 +225,16 @@ class Configuration(dict):
self.otel_service_name: str = "comfyui"
self.otel_service_version: str = "0.0.1"
self.otel_exporter_otlp_endpoint: Optional[str] = None
for key, value in kwargs.items():
self[key] = value
self.executor_factory: str = "ThreadPoolExecutor"
self.openai_api_key: Optional[str] = None
self.ideogram_api_key: Optional[str] = None
self.anthropic_api_key: Optional[str] = None
self.user_directory: Optional[str] = None
self.panic_when: list[str] = []
self.workflows: list[str] = []
for key, value in kwargs.items():
self[key] = value
# this must always be last
def __getattr__(self, item):
if item not in self:
@ -287,8 +278,8 @@ class Configuration(dict):
return state
def __setstate__(self, state):
self.update(state)
self._observers = []
self.update(state)
@property
def verbose(self) -> str:

View File

@ -13,8 +13,9 @@ class FileOutput(TypedDict, total=False):
class Output(TypedDict, total=False):
latents: NotRequired[List[FileOutput]]
images: NotRequired[List[FileOutput]]
latents: NotRequired[list[FileOutput]]
images: NotRequired[list[FileOutput]]
videos: NotRequired[list[FileOutput]]
@dataclasses.dataclass

View File

@ -1,6 +1,7 @@
from __future__ import annotations
import asyncio
import copy
import gc
import json
import threading
@ -35,6 +36,7 @@ def _execute_prompt(
span_context: dict,
progress_handler: ExecutorToClientProgress | None,
configuration: Configuration | None) -> dict:
configuration = copy.deepcopy(configuration) if configuration is not None else None
execution_context = current_execution_context()
if len(execution_context.folder_names_and_paths) == 0 or configuration is not None:
init_default_paths(execution_context.folder_names_and_paths, configuration, replace_existing=True)
@ -59,7 +61,7 @@ async def __execute_prompt(
from ..cmd.execution import PromptExecutor
progress_handler = progress_handler or ServerStub()
prompt_executor: PromptExecutor = None
try:
prompt_executor: PromptExecutor = _prompt_executor.executor
except (LookupError, AttributeError):
@ -121,11 +123,9 @@ def _cleanup():
pass
class EmbeddedComfyClient:
class Comfy:
"""
Embedded client for comfy executing prompts as a library.
This client manages a single-threaded executor to run long-running or blocking tasks
This manages a single-threaded executor to run long-running or blocking workflows
asynchronously without blocking the asyncio event loop. It initializes a PromptExecutor
in a dedicated thread for executing prompts and handling server-stub communications.
Example usage:
@ -186,7 +186,17 @@ class EmbeddedComfyClient:
self._is_running = False
async def queue_prompt_api(self,
prompt: PromptDict) -> V1QueuePromptResponse:
prompt: PromptDict | str | dict) -> V1QueuePromptResponse:
"""
Queues a prompt for execution, returning the output when it is complete.
:param prompt: a PromptDict, string or dictionary containing a so-called Workflow API prompt
:return: a response of URLs for Save-related nodes and the node outputs
"""
if isinstance(prompt, str):
prompt = json.loads(prompt)
if isinstance(prompt, dict):
from comfy.api.components.schema.prompt import Prompt
prompt = Prompt.validate(prompt)
outputs = await self.queue_prompt(prompt)
return V1QueuePromptResponse(urls=[], outputs=outputs)
@ -217,3 +227,6 @@ class EmbeddedComfyClient:
finally:
with self._task_count_lock:
self._task_count -= 1
EmbeddedComfyClient = Comfy

View File

@ -1,4 +1,4 @@
def load_extra_path_config(yaml_path):
def load_extra_path_config(yaml_path, folder_names=None):
from ..extra_config import load_extra_path_config
return load_extra_path_config(yaml_path)
return load_extra_path_config(yaml_path, folder_names)

View File

@ -67,7 +67,7 @@ def init_default_paths(folder_names_and_paths: FolderNames, configuration: Optio
configuration = configuration or args
if base_paths_from_configuration:
base_paths = [Path(configuration.cwd) if configuration.cwd is not None else None] + [Path(configuration.base_directory) if configuration.base_directory is not None else None] + configuration.base_paths
base_paths = [Path(configuration.cwd) if configuration.cwd is not None else None] + [Path(configuration.base_directory) if configuration.base_directory is not None else None] + (configuration.base_paths or [])
base_paths = [Path(path) for path in base_paths if path is not None]
if len(base_paths) == 0:
base_paths = [Path(os.getcwd())]
@ -250,7 +250,7 @@ def exists_annotated_filepath(name):
return os.path.exists(filepath)
def add_model_folder_path(folder_name, full_folder_path: Optional[str] = None, extensions: Optional[set[str] | frozenset[str]] = None, is_default: bool = False) -> str:
def add_model_folder_path(folder_name, full_folder_path: Optional[str] = None, extensions: Optional[set[str] | frozenset[str]] = None, is_default: bool = False, folder_names_and_paths: Optional[FolderNames] = None) -> str:
"""
Registers a model path for the given canonical name.
:param folder_name: the folder name
@ -258,7 +258,7 @@ def add_model_folder_path(folder_name, full_folder_path: Optional[str] = None, e
:param extensions: supported file extensions
:return: the folder path
"""
folder_names_and_paths = _folder_names_and_paths()
folder_names_and_paths = folder_names_and_paths or _folder_names_and_paths()
if full_folder_path is None:
if folder_name not in folder_names_and_paths:
folder_names_and_paths.add(ModelPaths(folder_names=[folder_name], supported_extensions=set(extensions) if extensions is not None else _supported_pt_extensions()))

View File

@ -64,7 +64,8 @@ def add_model_folder_path(
folder_name: str,
full_folder_path: Optional[str] = ...,
extensions: Optional[Union[set[str], frozenset[str]]] = ...,
is_default: bool = ...
is_default: bool = ...,
folder_names_and_paths: Optional[FolderNames] = ...,
) -> str: ...

View File

@ -10,6 +10,7 @@ import time
from pathlib import Path
from typing import Optional
from comfy.component_model.entrypoints_common import configure_application_paths, executor_from_args
# main_pre must be the earliest import since it suppresses some spurious warnings
from .main_pre import args
from . import hook_breaker_ac10a0
@ -243,13 +244,20 @@ async def _start_comfyui(from_script_dir: Optional[Path] = None):
if args.quick_test_for_ci:
# for CI purposes, try importing all the nodes
import_all_nodes_in_workspace(raise_on_failure=True)
exit(0)
return
else:
# we no longer lazily load nodes. we'll do it now for the sake of creating directories
import_all_nodes_in_workspace(raise_on_failure=False)
# now that nodes are loaded, create more directories if appropriate
folder_paths.create_directories()
if len(args.workflows) > 0:
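# headless mode: execute the provided workflows, print their outputs, and return without starting the web server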
configure_application_paths(args)
executor = await executor_from_args(args)
from ..entrypoints.workflow import run_workflows
await run_workflows(executor, args.workflows)
return
# replaced by folder_paths.create_directories
call_on_start = None
if args.auto_launch:

View File

@ -7,10 +7,17 @@ Use this instead of cli_args to import the args:
It will enable command line argument parsing. If this isn't desired, you must author your own implementation of these fixes.
"""
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
os.environ["TORCHINDUCTOR_FX_GRAPH_CACHE"] = "1"
os.environ["TORCHINDUCTOR_AUTOGRAD_CACHE"] = "1"
os.environ["BITSANDBYTES_NOWELCOME"] = "1"
import ctypes
import importlib.util
import logging
import os
import shutil
import sys
import warnings
@ -43,6 +50,7 @@ warnings.filterwarnings("ignore", message="Importing from timm.models.registry i
warnings.filterwarnings("ignore", message="Importing from timm.models.layers is deprecated, please import via timm.layers", category=FutureWarning)
warnings.filterwarnings("ignore", message="Inheritance class _InstrumentedApplication from web.Application is discouraged", category=DeprecationWarning)
warnings.filterwarnings("ignore", message="Please import `gaussian_filter` from the `scipy.ndimage` namespace; the `scipy.ndimage.filters` namespace is deprecated", category=DeprecationWarning)
warnings.filterwarnings("ignore", message="The installed version of bitsandbytes was compiled without GPU support")
from ..cli_args import args
@ -64,11 +72,6 @@ try:
except Exception:
pass
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
os.environ["TORCHINDUCTOR_FX_GRAPH_CACHE"] = "1"
os.environ["TORCHINDUCTOR_AUTOGRAD_CACHE"] = "1"
def _fix_pytorch_240():
"""Fixes pytorch 2.4.0"""
@ -108,13 +111,10 @@ def _create_tracer():
sampler = ProgressSpanSampler()
provider = TracerProvider(resource=resource, sampler=sampler)
is_debugging = hasattr(sys, 'gettrace') and sys.gettrace() is not None
has_endpoint = args.otel_exporter_otlp_endpoint is not None
if has_endpoint:
otlp_exporter = OTLPSpanExporter()
# elif is_debugging:
# otlp_exporter = ConsoleSpanExporter("comfyui")
else:
otlp_exporter = SpanExporter()
@ -133,8 +133,8 @@ def _create_tracer():
def _configure_logging():
logging_level = args.logging_level
if args.distributed_queue_worker or args.distributed_queue_frontend or args.distributed_queue_connection_uri is not None:
logging.basicConfig(level=logging_level)
if len(args.workflows) > 0 or args.distributed_queue_worker or args.distributed_queue_frontend or args.distributed_queue_connection_uri is not None:
logging.basicConfig(level=logging_level, stream=sys.stderr)
else:
logger.setup_logger(logging_level)

View File

@ -1,68 +0,0 @@
import asyncio
import itertools
import logging
import os
from .extra_model_paths import load_extra_path_config
from .main_pre import args
from ..distributed.executors import ContextVarExecutor, ContextVarProcessPoolExecutor
async def main():
# assume we are a worker
args.distributed_queue_worker = True
args.distributed_queue_frontend = False
assert args.distributed_queue_connection_uri is not None, "Set the --distributed-queue-connection-uri argument to your RabbitMQ server"
# configure paths
if args.output_directory:
output_dir = os.path.abspath(args.output_directory)
logging.info(f"Setting output directory to: {output_dir}")
from ..cmd import folder_paths
folder_paths.set_output_directory(output_dir)
if args.input_directory:
input_dir = os.path.abspath(args.input_directory)
logging.info(f"Setting input directory to: {input_dir}")
from ..cmd import folder_paths
folder_paths.set_input_directory(input_dir)
if args.temp_directory:
temp_dir = os.path.abspath(args.temp_directory)
logging.info(f"Setting temp directory to: {temp_dir}")
from ..cmd import folder_paths
folder_paths.set_temp_directory(temp_dir)
if args.extra_model_paths_config:
for config_path in itertools.chain(*args.extra_model_paths_config):
load_extra_path_config(config_path)
from ..distributed.distributed_prompt_worker import DistributedPromptWorker
if args.executor_factory in ("ThreadPoolExecutor", "ContextVarExecutor"):
executor = ContextVarExecutor()
elif args.executor_factory in ("ProcessPoolExecutor", "ContextVarProcessPoolExecutor"):
executor = ContextVarProcessPoolExecutor()
else:
# default executor
executor = ContextVarExecutor()
async with DistributedPromptWorker(connection_uri=args.distributed_queue_connection_uri,
queue_name=args.distributed_queue_name,
executor=executor):
stop = asyncio.Event()
try:
await stop.wait()
except asyncio.CancelledError:
pass
def entrypoint():
asyncio.run(main())
if __name__ == "__main__":
entrypoint()

View File

@ -0,0 +1,41 @@
import asyncio

try:
    from collections.abc import Buffer
except ImportError:
    from typing_extensions import Buffer

from io import BytesIO
from typing import Literal, AsyncGenerator

import ijson
import aiofiles
import sys
import shlex


async def stream_json_objects(source_path_or_stdin: str | Literal["-"]) -> AsyncGenerator[dict, None]:
    """
    Asynchronously yields JSON objects from a given source.
    The source can be a file path or "-" for stdin.
    Assumes the input stream contains concatenated JSON objects (e.g., {}{}{}).
    """
    if source_path_or_stdin is None or len(source_path_or_stdin) == 0:
        return
    elif source_path_or_stdin == "-":
        async for obj in ijson.items_async(aiofiles.stdin_bytes, '', multiple_values=True):
            yield obj
    else:
        # Handle file path or literal JSON
        if "{" in source_path_or_stdin[:2]:
            # literal string
            encode: Buffer = source_path_or_stdin.encode("utf-8")
            source_path_or_stdin = BytesIO(encode)
            for obj in ijson.items(source_path_or_stdin, '', multiple_values=True):
                yield obj
        else:
            async with aiofiles.open(source_path_or_stdin, mode='rb') as f:
                # 'rb' mode is important as ijson expects byte streams.
                # The prefix '' targets root-level objects.
                # multiple_values=True allows parsing of multiple top-level JSON values.
                async for obj in ijson.items_async(f, '', multiple_values=True):
                    yield obj

View File

@ -0,0 +1,41 @@
from typing import Optional

from ..cli_args_types import Configuration
from ..cmd.extra_model_paths import load_extra_path_config
from .folder_path_types import FolderNames
from ..component_model.platform_path import construct_path

import itertools
import os

from ..distributed.executors import ContextVarExecutor, ContextVarProcessPoolExecutor


def configure_application_paths(args: Configuration, folder_names: Optional[FolderNames] = None):
    if folder_names is None:
        from ..cmd import folder_paths
        folder_names = folder_paths.folder_names_and_paths
    # configure paths
    if args.output_directory:
        folder_names.application_paths.output_directory = construct_path(args.output_directory)
    if args.input_directory:
        folder_names.application_paths.input_directory = construct_path(args.input_directory)
    if args.temp_directory:
        folder_names.application_paths.temp_directory = construct_path(args.temp_directory)
    if args.extra_model_paths_config:
        for config_path in itertools.chain(*args.extra_model_paths_config):
            load_extra_path_config(config_path, folder_names=folder_names)


async def executor_from_args(configuration: Optional[Configuration] = None):
    if configuration is None:
        from ..cli_args import args
        configuration = args
    if configuration.executor_factory in ("ThreadPoolExecutor", "ContextVarExecutor"):
        executor = ContextVarExecutor()
    elif configuration.executor_factory in ("ProcessPoolExecutor", "ContextVarProcessPoolExecutor"):
        executor = ContextVarProcessPoolExecutor()
    else:
        # default executor
        executor = ContextVarExecutor()
    return executor

View File

@ -0,0 +1,14 @@
import contextlib
import io
import sys


@contextlib.contextmanager
def suppress_stdout_stderr():
    new_stdout, new_stderr = io.StringIO(), io.StringIO()
    old_stdout, old_stderr = sys.stdout, sys.stderr
    try:
        sys.stdout, sys.stderr = new_stdout, new_stderr
        yield
    finally:
        sys.stdout, sys.stderr = old_stdout, old_stderr

View File

@ -14,7 +14,7 @@ from .executors import ContextVarExecutor
from .distributed_progress import DistributedExecutorToClientProgress
from .distributed_types import RpcRequest, RpcReply
from .process_pool_executor import ProcessPoolExecutor
from ..client.embedded_comfy_client import EmbeddedComfyClient
from ..client.embedded_comfy_client import Comfy
from ..cmd.main_pre import tracer
from ..component_model.queue_types import ExecutionStatus
@ -24,7 +24,7 @@ class DistributedPromptWorker:
A distributed prompt worker.
"""
def __init__(self, embedded_comfy_client: Optional[EmbeddedComfyClient] = None,
def __init__(self, embedded_comfy_client: Optional[Comfy] = None,
connection_uri: str = "amqp://localhost:5672/",
queue_name: str = "comfyui",
health_check_port: int = 9090,
@ -124,7 +124,7 @@ class DistributedPromptWorker:
self._rpc = await JsonRPC.create(channel=self._channel, auto_delete=True, durable=False)
if self._embedded_comfy_client is None:
self._embedded_comfy_client = EmbeddedComfyClient(progress_handler=DistributedExecutorToClientProgress(self._rpc, self._queue_name, self._loop), executor=self._executor)
self._embedded_comfy_client = Comfy(progress_handler=DistributedExecutorToClientProgress(self._rpc, self._queue_name, self._loop), executor=self._executor)
if not self._embedded_comfy_client.is_running:
await self._exit_stack.enter_async_context(self._embedded_comfy_client)

View File

@ -1,12 +1,10 @@
from __future__ import annotations
from dataclasses import dataclass
from typing import Tuple, Literal, List, Callable
from typing import Tuple, Literal, List
from ..api.components.schema.prompt import PromptDict, Prompt
from ..auth.permissions import ComfyJwt, jwt_decode
from ..cli_args_types import Configuration
from ..component_model.executor_types import ExecutorToClientProgress
from ..component_model.queue_types import NamedQueueTuple, TaskInvocation, ExecutionStatus

View File

View File

@ -0,0 +1,34 @@
import asyncio

from ..cmd.main_pre import args
from ..component_model.entrypoints_common import configure_application_paths, executor_from_args
from ..distributed.executors import ContextVarExecutor, ContextVarProcessPoolExecutor


async def main():
    # assume we are a worker
    from ..distributed.distributed_prompt_worker import DistributedPromptWorker
    args.distributed_queue_worker = True
    args.distributed_queue_frontend = False
    assert args.distributed_queue_connection_uri is not None, "Set the --distributed-queue-connection-uri argument to your RabbitMQ server"

    configure_application_paths(args)
    executor = await executor_from_args(args)
    async with DistributedPromptWorker(connection_uri=args.distributed_queue_connection_uri,
                                       queue_name=args.distributed_queue_name,
                                       executor=executor):
        stop = asyncio.Event()
        try:
            await stop.wait()
        except asyncio.CancelledError:
            pass


def entrypoint():
    asyncio.run(main())


if __name__ == "__main__":
    entrypoint()

View File

@ -0,0 +1,46 @@
import asyncio
import json
import logging
from typing import Optional, Literal

import typer

from ..cmd.main_pre import args
from ..cli_args_types import Configuration
from ..component_model.asyncio_files import stream_json_objects
from ..client.embedded_comfy_client import Comfy
from ..component_model.entrypoints_common import configure_application_paths, executor_from_args

logger = logging.getLogger(__name__)


async def main():
    workflows = args.workflows
    assert len(workflows) > 0, "specify at least one path to a workflow, a literal workflow json starting with `{` or `-` (for standard in) using --workflows cli arg"
    configure_application_paths(args)
    executor = await executor_from_args(args)
    await run_workflows(executor, workflows)


async def run_workflows(executor, workflows: list[str | Literal["-"]], configuration: Optional[Configuration] = None):
    if configuration is None:
        configuration = args
    async with Comfy(executor=executor, configuration=configuration) as comfy:
        for workflow in workflows:
            obj: dict
            async for obj in stream_json_objects(workflow):
                try:
                    res = await comfy.queue_prompt_api(obj)
                    typer.echo(json.dumps(res.outputs))
                except asyncio.CancelledError:
                    logger.info("Exiting gracefully.")
                    break


def entrypoint():
    asyncio.run(main())


if __name__ == "__main__":
    entrypoint()

View File

@ -1,4 +1,5 @@
import importlib
import sys
def import_exception_class(fqn: str):
@ -49,7 +50,7 @@ def should_panic_on_exception(exc: Exception, panic_classes: list[str]) -> bool:
exception_types = [import_exception_class(name)
for name in expanded_classes if name]
except ValueError as e:
print(f"Warning: {str(e)}")
print(f"Warning: {str(e)}", file=sys.stderr)
return False
# Check if exception matches any of the specified types

View File

@@ -1,12 +1,13 @@
import logging
import os
from typing import Optional
import yaml

from .component_model.folder_path_types import FolderNames


def load_extra_path_config(yaml_path):
def load_extra_path_config(yaml_path, folder_names: Optional[FolderNames] = None):
    from .cmd import folder_paths
    with open(yaml_path, 'r', encoding='utf-8') as stream:
        config = yaml.safe_load(stream)
    yaml_dir = os.path.dirname(os.path.abspath(yaml_path))
@@ -34,4 +35,4 @@ def load_extra_path_config(yaml_path):
                full_path = os.path.abspath(os.path.join(yaml_dir, y))
                normalized_path = os.path.normpath(full_path)
                logging.info("Adding extra search path {} {}".format(x, normalized_path))
                folder_paths.add_model_folder_path(x, normalized_path, is_default=is_default)
                folder_paths.add_model_folder_path(x, normalized_path, is_default=is_default, folder_names_and_paths=folder_names)
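With the new parameter, callers can thread their own `FolderNames` container through instead of mutating a global registry. A hedged sketch: `add_model_folder_path` and its keyword arguments are taken from the hunk above, while the no-argument `FolderNames()` construction and the example path are assumptions.

```python
# Hedged sketch: register one extra search path the same way load_extra_path_config
# does for each YAML entry. FolderNames() with no arguments is an assumption.
from comfy.cmd import folder_paths
from comfy.component_model.folder_path_types import FolderNames

names = FolderNames()
folder_paths.add_model_folder_path("loras", "/data/models/loras",
                                   is_default=False, folder_names_and_paths=names)
```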

View File

@@ -600,23 +600,23 @@ def attention_flash(q, k, v, heads, mask=None, attn_precision=None, skip_reshape
optimized_attention = attention_basic

if model_management.sage_attention_enabled():
    logger.info("Using sage attention")
    logger.debug("Using sage attention")
    optimized_attention = attention_sage
elif model_management.xformers_enabled():
    logger.info("Using xformers attention")
    logger.debug("Using xformers attention")
    optimized_attention = attention_xformers
elif model_management.flash_attention_enabled():
    logging.info("Using Flash Attention")
    logging.debug("Using Flash Attention")
    optimized_attention = attention_flash
elif model_management.pytorch_attention_enabled():
    logger.info("Using pytorch attention")
    logger.debug("Using pytorch attention")
    optimized_attention = attention_pytorch
else:
    if args.use_split_cross_attention:
        logger.info("Using split optimization for attention")
        logger.debug("Using split optimization for attention")
        optimized_attention = attention_split
    else:
        logger.info("Using sub quadratic optimization for attention, if you have memory or speed issues try using: --use-split-cross-attention")
        logger.debug("Using sub quadratic optimization for attention, if you have memory or speed issues try using: --use-split-cross-attention")
        optimized_attention = attention_sub_quad

optimized_attention_masked = optimized_attention

View File

@@ -263,7 +263,7 @@ try:
    logger.debug("pytorch version: {}".format(torch_version))
    mac_ver = mac_version()
    if mac_ver is not None:
        logging.info("Mac Version {}".format(mac_ver))
        logger.debug("Mac Version {}".format(mac_ver))
except:
    pass
@@ -343,7 +343,7 @@ except:
try:
    if is_amd():
        arch = torch.cuda.get_device_properties(get_torch_device()).gcnArchName
        logging.info("AMD arch: {}".format(arch))
        logger.info("AMD arch: {}".format(arch))
        if args.use_split_cross_attention == False and args.use_quad_cross_attention == False:
            if torch_version_numeric[0] >= 2 and torch_version_numeric[1] >= 7:  # works on 2.6 but doesn't actually seem to improve much
                if any((a in arch) for a in ["gfx1100", "gfx1101"]):  # TODO: more arches
@@ -361,7 +361,7 @@ try:
    if is_nvidia() and PerformanceFeature.Fp16Accumulation in args.fast:
        torch.backends.cuda.matmul.allow_fp16_accumulation = True
        PRIORITIZE_FP16 = True  # TODO: limit to cards where it actually boosts performance
        logging.info("Enabled fp16 accumulation.")
        logger.info("Enabled fp16 accumulation.")
except:
    pass
@@ -369,7 +369,7 @@ try:
    if torch_version_numeric[0] == 2 and torch_version_numeric[1] >= 5:
        torch.backends.cuda.allow_fp16_bf16_reduction_math_sdp(True)  # pylint: disable=no-member
except:
    logging.warning("Warning, could not set allow_fp16_bf16_reduction_math_sdp")
    logger.warning("Warning, could not set allow_fp16_bf16_reduction_math_sdp")

if args.lowvram:
    set_vram_to = VRAMState.LOW_VRAM
@@ -386,8 +386,8 @@ if args.force_fp32:
    logger.info("Forcing FP32, if this improves things please report it.")
    FORCE_FP32 = True

if args.force_fp16 or cpu_state == CPUState.MPS:
    logger.info("Forcing FP16.")
if args.force_fp16:
    logger.debug("Forcing FP16.")
    FORCE_FP16 = True

if args.force_bf16:
@@ -1256,7 +1256,7 @@ def should_use_fp16(device=None, model_params=0, prioritize_performance=True, ma
        return True

    if (device is not None and is_device_mps(device)) or mps_mode():
        return True
        return not bfloat16_support_mps(device)

    if cpu_mode():
        return False
@@ -1327,9 +1327,7 @@ def should_use_bf16(device=None, model_params=0, prioritize_performance=True, ma
        return False

    if (device is not None and is_device_mps(device)) or mps_mode():
        if mac_version() < (14,):
            return False
        return True
        return bfloat16_support_mps(device)

    if cpu_mode():
        return False
@@ -1368,6 +1366,19 @@ def should_use_bf16(device=None, model_params=0, prioritize_performance=True, ma
    return False


def bfloat16_support_mps(device):
    # test bfloat 16
    try:
        x = torch.ones(1, dtype=torch.bfloat16, device=device)
        x = x + 1.0
        _ = repr(x)
        supported = True
        del x
    except:
        supported = False
    return supported


def supports_fp8_compute(device=None):
    if not is_nvidia():
        return False
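The MPS branches above now defer to this runtime probe instead of a macOS version check. The same pattern can be reproduced outside ComfyUI; a sketch, with an `mps` availability guard added here for safety (it is not part of the fork's function):

```python
# Standalone version of the bf16-on-MPS probe: run a tiny bfloat16 op and treat
# any failure as "unsupported". Same idea as above, not the fork's code.
import torch


def mps_supports_bfloat16() -> bool:
    if not torch.backends.mps.is_available():
        return False
    try:
        x = torch.ones(1, dtype=torch.bfloat16, device="mps")
        _ = repr(x + 1.0)  # force the op to actually execute
        return True
    except Exception:
        return False


if __name__ == "__main__":
    print(mps_supports_bfloat16())
```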

View File

@@ -30,7 +30,7 @@ def _vanilla_load_importing_execute_prestartup_script(node_paths: Iterable[str])
            spec.loader.exec_module(module)
            return True
        except Exception as e:
            print(f"Failed to execute startup-script: {script_path} / {e}")
            print(f"Failed to execute startup-script: {script_path} / {e}", file=sys.stderr)
            return False

    node_prestartup_times = []
@@ -52,14 +52,14 @@ def _vanilla_load_importing_execute_prestartup_script(node_paths: Iterable[str])
            success = execute_script(script_path)
            node_prestartup_times.append((time.perf_counter() - time_before, module_path, success))

    if len(node_prestartup_times) > 0:
        print("\nPrestartup times for custom nodes:")
        print("\nPrestartup times for custom nodes:", file=sys.stderr)
        for n in sorted(node_prestartup_times):
            if n[2]:
                import_message = ""
            else:
                import_message = " (PRESTARTUP FAILED)"
            print("{:6.1f} seconds{}:".format(n[0], import_message), n[1])
        print()
            print("{:6.1f} seconds{}:".format(n[0], import_message), n[1], file=sys.stderr)
        print("\n", file=sys.stderr)
@contextmanager
@@ -118,12 +118,12 @@ def _vanilla_load_custom_nodes_1(module_path, ignore=set()) -> ExportedNodes:
                exported_nodes.NODE_DISPLAY_NAME_MAPPINGS.update(module.NODE_DISPLAY_NAME_MAPPINGS)
            return exported_nodes
        else:
            print(f"Skip {module_path} module for custom nodes due to the lack of NODE_CLASS_MAPPINGS.")
            print(f"Skip {module_path} module for custom nodes due to the lack of NODE_CLASS_MAPPINGS.", file=sys.stderr)
            return exported_nodes
    except Exception as e:
        import traceback
        print(traceback.format_exc())
        print(f"Cannot import {module_path} module for custom nodes:", e)
        print(f"Cannot import {module_path} module for custom nodes:", e, file=sys.stderr)
        return exported_nodes
@@ -151,14 +151,14 @@ def _vanilla_load_custom_nodes_2(node_paths: Iterable[str]) -> ExportedNodes:
            exported_nodes.update(possible_exported_nodes)

    if len(node_import_times) > 0:
        print("\nImport times for custom nodes:")
        print("\nImport times for custom nodes:", file=sys.stderr)
        for n in sorted(node_import_times):
            if n[2]:
                import_message = ""
            else:
                import_message = " (IMPORT FAILED)"
            print("{:6.1f} seconds{}:".format(n[0], import_message), n[1])
        print()
            print("{:6.1f} seconds{}:".format(n[0], import_message), n[1], file=sys.stderr)
        print("\n", file=sys.stderr)

    return exported_nodes
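As the skip message above indicates, a module is only accepted as a custom node package when it exposes `NODE_CLASS_MAPPINGS`. A minimal sketch of such a module, following the usual ComfyUI node conventions; the node itself is illustrative only.

```python
# Minimal custom-node module shape accepted by the loader above: it must define
# NODE_CLASS_MAPPINGS; NODE_DISPLAY_NAME_MAPPINGS is optional. Example node only.
class UppercaseText:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"text": ("STRING", {"default": ""})}}

    RETURN_TYPES = ("STRING",)
    FUNCTION = "run"
    CATEGORY = "examples"

    def run(self, text):
        return (text.upper(),)


NODE_CLASS_MAPPINGS = {"UppercaseText": UppercaseText}
NODE_DISPLAY_NAME_MAPPINGS = {"UppercaseText": "Uppercase Text"}
```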

View File

@@ -535,24 +535,24 @@ if __name__ == "__main__":
             "site_data_dir",
             "site_config_dir")

    print("-- app dirs %s --" % __version__)
    print("-- app dirs %s --" % __version__, file=sys.stderr)
    print("-- app dirs (with optional 'version')")
    print("-- app dirs (with optional 'version')", file=sys.stderr)
    dirs = AppDirs(appname, appauthor, version="1.0")
    for prop in props:
        print("%s: %s" % (prop, getattr(dirs, prop)))
    print("\n-- app dirs (without optional 'version')")
    print("\n-- app dirs (without optional 'version')", file=sys.stderr)
    dirs = AppDirs(appname, appauthor)
    for prop in props:
        print("%s: %s" % (prop, getattr(dirs, prop)))
        print("%s: %s" % (prop, getattr(dirs, prop)), file=sys.stderr)
    print("\n-- app dirs (without optional 'appauthor')")
    print("\n-- app dirs (without optional 'appauthor')", file=sys.stderr)
    dirs = AppDirs(appname)
    for prop in props:
        print("%s: %s" % (prop, getattr(dirs, prop)))
        print("%s: %s" % (prop, getattr(dirs, prop)), file=sys.stderr)
    print("\n-- app dirs (with disabled 'appauthor')")
    print("\n-- app dirs (with disabled 'appauthor')", file=sys.stderr)
    dirs = AppDirs(appname, appauthor=False)
    for prop in props:
        print("%s: %s" % (prop, getattr(dirs, prop)))
        print("%s: %s" % (prop, getattr(dirs, prop)), file=sys.stderr)

View File

@@ -1,8 +1,11 @@
import dataclasses
from typing import Any

from comfy.component_model.suppress_stdout import suppress_stdout_stderr

try:
    import bitsandbytes as bnb
    with suppress_stdout_stderr():
        import bitsandbytes as bnb
    from bitsandbytes.nn.modules import Params4bit, QuantState

    has_bitsandbytes = True
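`suppress_stdout_stderr` is imported from `comfy.component_model.suppress_stdout` but is not shown in this diff. A plausible minimal implementation of such a context manager is sketched below for orientation only; the fork's version may instead redirect at the file-descriptor level to also silence output from native code.

```python
# Hedged sketch of a suppress_stdout_stderr() context manager; not the fork's code.
import os
import sys
from contextlib import contextmanager


@contextmanager
def suppress_stdout_stderr():
    with open(os.devnull, "w") as devnull:
        old_stdout, old_stderr = sys.stdout, sys.stderr
        sys.stdout, sys.stderr = devnull, devnull
        try:
            yield
        finally:
            sys.stdout, sys.stderr = old_stdout, old_stderr
```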

View File

@@ -136,8 +136,8 @@ async def main():
    # configuration.cwd = os.path.dirname(__file__)
    configuration = Configuration()

    from comfy.client.embedded_comfy_client import EmbeddedComfyClient
    async with EmbeddedComfyClient(configuration=configuration) as client:
    from comfy.client.embedded_comfy_client import Comfy
    async with Comfy(configuration=configuration) as client:
        # This will run your prompt
        outputs = await client.queue_prompt(prompt)
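The rename also applies to the typed path used elsewhere in this commit, where a raw dict is validated into a `Prompt` and queued with `queue_prompt_api`. A sketch with a placeholder graph; `Prompt.validate`, `queue_prompt_api`, and `.outputs` all appear in this commit's tests and entrypoints.

```python
# Sketch of the typed API with the renamed Comfy client. The workflow dict is a
# placeholder graph, not a runnable workflow.
import asyncio

from comfy.api.components.schema.prompt import Prompt
from comfy.client.embedded_comfy_client import Comfy

WORKFLOW = {"1": {"class_type": "SomeNode", "inputs": {}}}  # placeholder graph


async def main():
    async with Comfy() as client:
        prompt = Prompt.validate(WORKFLOW)
        result = await client.queue_prompt_api(prompt)
        print(result.outputs)


if __name__ == "__main__":
    asyncio.run(main())
```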

View File

@@ -96,6 +96,8 @@ dependencies = [
"jax",
"colour",
"av",
"typer",
"ijson",
]
[build-system]
@@ -157,7 +159,8 @@ withtriton = ["comfyui[cuda, triton]"] # Depends on 'cuda' and 'triton' extras
[project.scripts]
comfyui = "comfy.cmd.main:entrypoint"
comfyui-worker = "comfy.cmd.worker:entrypoint"
comfyui-worker = "comfy.entrypoints.worker:entrypoint"
comfyui-workflow = "comfy.entrypoints.workflow:entrypoint"
[project.urls]
Homepage = "https://github.com/comfyanonymous/ComfyUI" # Example
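For clarity, the console-script entries above follow standard packaging behavior: each generated script imports the named module and calls the referenced function. Invoking it by hand is therefore equivalent to:

```python
# Equivalent of running the `comfyui-workflow` console script defined above.
from comfy.entrypoints.workflow import entrypoint

if __name__ == "__main__":
    entrypoint()
```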

View File

@@ -12,7 +12,7 @@ from aiohttp import ClientSession
from testcontainers.rabbitmq import RabbitMqContainer
from comfy.client.aio_client import AsyncRemoteComfyClient
from comfy.client.embedded_comfy_client import EmbeddedComfyClient
from comfy.client.embedded_comfy_client import Comfy
from comfy.client.sdxl_with_refiner_workflow import sdxl_workflow_with_refiner
from comfy.component_model.executor_types import Executor
from comfy.component_model.make_mutable import make_mutable
@@ -86,7 +86,7 @@ async def test_distributed_prompt_queues_same_process():
assert incoming is not None
incoming_named = NamedQueueTuple(incoming)
assert incoming_named.prompt_id == incoming_prompt_id
async with EmbeddedComfyClient() as embedded_comfy_client:
async with Comfy() as embedded_comfy_client:
outputs = await embedded_comfy_client.queue_prompt(incoming_named.prompt,
incoming_named.prompt_id)
worker.task_done(incoming_named.prompt_id, outputs, ExecutionStatus("success", True, []))
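For orientation, `ExecutionStatus` as used above is a small status triple; the field meanings below are inferred from this call site and may not match the exact definition in `queue_types`.

```python
# Inferred usage only: the three positional values at the call site above look like
# (status string, completed flag, message list); verify against queue_types.
from comfy.component_model.queue_types import ExecutionStatus

status = ExecutionStatus("success", True, [])
```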

View File

@@ -9,7 +9,7 @@ from PIL import Image
from pytest import fixture
from comfy.cli_args import default_configuration
from comfy.client.embedded_comfy_client import EmbeddedComfyClient
from comfy.client.embedded_comfy_client import Comfy
from comfy.component_model.executor_types import SendSyncEvent, SendSyncData, ExecutingMessage, ExecutionErrorMessage, \
DependencyCycleError
from comfy.distributed.server_stub import ServerStub
@@ -62,7 +62,7 @@ class _ProgressHandler(ServerStub):
class ComfyClient:
def __init__(self, embedded_client: EmbeddedComfyClient, progress_handler: _ProgressHandler):
def __init__(self, embedded_client: Comfy, progress_handler: _ProgressHandler):
self.embedded_client = embedded_client
self.progress_handler = progress_handler
@@ -116,7 +116,7 @@ class TestExecution:
configuration.cache_lru = lru_size
progress_handler = _ProgressHandler()
with context_add_custom_nodes(ExportedNodes(NODE_CLASS_MAPPINGS=NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS=NODE_DISPLAY_NAME_MAPPINGS)):
async with EmbeddedComfyClient(configuration, progress_handler=progress_handler) as embedded_client:
async with Comfy(configuration, progress_handler=progress_handler) as embedded_client:
yield ComfyClient(embedded_client, progress_handler)
@fixture

View File

@@ -5,7 +5,7 @@ from importlib.abc import Traversable
import pytest
from comfy.api.components.schema.prompt import Prompt
from comfy.client.embedded_comfy_client import EmbeddedComfyClient
from comfy.client.embedded_comfy_client import Comfy
from comfy.model_downloader import add_known_models, KNOWN_LORAS
from comfy.model_downloader_types import CivitFile, HuggingFile
from comfy_extras.nodes.nodes_audio import TorchAudioNotFoundError
@@ -14,8 +14,8 @@ from . import workflows
@pytest.fixture(scope="module", autouse=False)
async def client(tmp_path_factory) -> EmbeddedComfyClient:
async with EmbeddedComfyClient() as client:
async def client(tmp_path_factory) -> Comfy:
async with Comfy() as client:
yield client
@@ -28,7 +28,7 @@ def _prepare_for_workflows() -> dict[str, Traversable]:
@pytest.mark.asyncio
@pytest.mark.parametrize("workflow_name, workflow_file", _prepare_for_workflows().items())
async def test_workflow(workflow_name: str, workflow_file: Traversable, has_gpu: bool, client: EmbeddedComfyClient):
async def test_workflow(workflow_name: str, workflow_file: Traversable, has_gpu: bool, client: Comfy):
if not has_gpu:
pytest.skip("requires gpu")

View File

@@ -6,7 +6,7 @@ import pytest
from comfy.api.components.schema.prompt import Prompt
from comfy.cli_args_types import Configuration
from comfy.client.embedded_comfy_client import EmbeddedComfyClient
from comfy.client.embedded_comfy_client import Comfy
_TEST_WORKFLOW = {
"0": {
@@ -29,7 +29,7 @@ async def test_respect_cwd_param():
from comfy.cmd.folder_paths import models_dir
assert os.path.commonpath([os.getcwd(), models_dir]) == os.getcwd(), "at the time models_dir is accessed, the cwd should be the actual cwd, since there is no other configuration"
client = EmbeddedComfyClient(config)
client = Comfy(config)
prompt = Prompt.validate(_TEST_WORKFLOW)
outputs = await client.queue_prompt_api(prompt)
path_as_imported = outputs.outputs["0"]["path"][0]

View File

@@ -4,7 +4,7 @@ import pytest
import torch
from comfy.cli_args_types import Configuration
from comfy.client.embedded_comfy_client import EmbeddedComfyClient
from comfy.client.embedded_comfy_client import Comfy
from comfy.client.sdxl_with_refiner_workflow import sdxl_workflow_with_refiner
@@ -16,7 +16,7 @@ async def test_cuda_memory_usage():
device = torch.device("cuda")
starting_memory = torch.cuda.memory_allocated(device)
async with EmbeddedComfyClient() as client:
async with Comfy() as client:
prompt = sdxl_workflow_with_refiner("test")
outputs = await client.queue_prompt(prompt)
assert outputs["13"]["images"][0]["abs_path"] is not None
@@ -29,7 +29,7 @@ async def test_cuda_memory_usage():
@pytest.mark.asyncio
async def test_embedded_comfy():
async with EmbeddedComfyClient() as client:
async with Comfy() as client:
prompt = sdxl_workflow_with_refiner("test")
outputs = await client.queue_prompt(prompt)
assert outputs["13"]["images"][0]["abs_path"] is not None
@@ -37,14 +37,14 @@ async def test_embedded_comfy():
@pytest.mark.asyncio
async def test_configuration_options():
config = Configuration()
async with EmbeddedComfyClient(configuration=config) as client:
async with Comfy(configuration=config) as client:
prompt = sdxl_workflow_with_refiner("test")
outputs = await client.queue_prompt(prompt)
assert outputs["13"]["images"][0]["abs_path"] is not None
@pytest.mark.asyncio
async def test_multithreaded_comfy():
async with EmbeddedComfyClient(max_workers=2) as client:
async with Comfy(max_workers=2) as client:
prompt = sdxl_workflow_with_refiner("test")
outputs_iter = await asyncio.gather(*[client.queue_prompt(prompt) for _ in range(4)])
assert all(outputs["13"]["images"][0]["abs_path"] is not None for outputs in outputs_iter)

View File

@@ -6,7 +6,7 @@ import pytest
import torch
from comfy.cli_args_types import Configuration
from comfy.client.embedded_comfy_client import EmbeddedComfyClient
from comfy.client.embedded_comfy_client import Comfy
from comfy.component_model.make_mutable import make_mutable
from comfy.component_model.tensor_types import RGBImageBatch
from comfy.distributed.executors import ContextVarExecutor
@@ -153,7 +153,7 @@ async def test_panic_on_exception_with_executor(executor_cls, executor_kwargs):
NODE_DISPLAY_NAME_MAPPINGS=TEST_NODE_DISPLAY_NAME_MAPPINGS)),
patch('sys.exit') as mock_exit):
try:
async with EmbeddedComfyClient(configuration=config, executor=executor) as client:
async with Comfy(configuration=config, executor=executor) as client:
# Queue our failing workflow
await client.queue_prompt(create_failing_workflow())
except SystemExit:
@@ -188,7 +188,7 @@ async def test_no_panic_when_disabled_with_executor(executor_cls, executor_kwarg
NODE_DISPLAY_NAME_MAPPINGS=TEST_NODE_DISPLAY_NAME_MAPPINGS)),
patch('sys.exit') as mock_exit):
try:
async with EmbeddedComfyClient(configuration=config, executor=executor) as client:
async with Comfy(configuration=config, executor=executor) as client:
# Queue our failing workflow
await client.queue_prompt(create_failing_workflow())
except SystemExit:
@@ -213,7 +213,7 @@ async def test_executor_cleanup(executor_cls, executor_kwargs):
with context_add_custom_nodes(ExportedNodes(NODE_CLASS_MAPPINGS=TEST_NODE_CLASS_MAPPINGS,
NODE_DISPLAY_NAME_MAPPINGS=TEST_NODE_DISPLAY_NAME_MAPPINGS)):
async with EmbeddedComfyClient(executor=executor) as client:
async with Comfy(executor=executor) as client:
# Create a simple workflow that doesn't raise
workflow = create_failing_workflow()
workflow["1"]["inputs"]["should_raise"] = False
@@ -235,7 +235,7 @@ async def test_parallel_execution(executor_cls, executor_kwargs):
with context_add_custom_nodes(ExportedNodes(NODE_CLASS_MAPPINGS=TEST_NODE_CLASS_MAPPINGS,
NODE_DISPLAY_NAME_MAPPINGS=TEST_NODE_DISPLAY_NAME_MAPPINGS)):
async with EmbeddedComfyClient(executor=executor) as client:
async with Comfy(executor=executor) as client:
# Create multiple non-failing workflows
workflow = create_failing_workflow()
workflow["1"]["inputs"]["should_raise"] = False