Update docs for configuration

doctorpangloss 2024-05-24 11:27:59 -07:00
parent a00dbe95ef
commit 10bba729cb
2 changed files with 103 additions and 165 deletions

README.md

@@ -501,213 +501,151 @@ Ctrl can also be replaced with Cmd instead for macOS users
# Configuration
This supports configuration with command line arguments, the environment and a configuration file.
## Configuration File
First, run `comfyui --help` to see all supported configuration options and arguments.
Args that start with `--` can also be set in a config file (`config.yaml` or `config.json` or specified via `-c`). Config file syntax allows: `key=value`, `flag=true`, `stuff=[a,b,c]` (for details, see syntax [here](https://goo.gl/R74nmi)). In general, command-line values override environment variables which override config file values which override defaults.
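The precedence order can be sketched as a lookup through layers, most specific first. This is an illustrative model of the behavior described above, not ComfyUI's actual internals; the layer names and values are hypothetical.

```python
# Sketch of "command line > environment > config file > defaults".
# Each layer is a plain dict; the first layer that defines the key wins.
def resolve(key, cli_args, environ, config_file, defaults):
    for layer in (cli_args, environ, config_file, defaults):
        if key in layer:
            return layer[key]
    raise KeyError(key)

value = resolve(
    "port",
    cli_args={},                   # --port was not passed
    environ={"port": "9000"},      # e.g. COMFYUI_PORT=9000
    config_file={"port": "8080"},  # e.g. port=8080 in config.yaml
    defaults={"port": "8188"},
)
print(value)  # "9000" — the environment variable wins when no CLI flag is given
```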
## Extra Model Paths
Copy [docs/examples/configuration/extra_model_paths.yaml](docs/examples/configuration/extra_model_paths.yaml) to your working directory, and modify the folder paths to match your folder structure.
You can pass additional extra model path configurations with one or more copies of `--extra-model-paths-config=some_configuration.yaml`.
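For example, to load two such files at once (the file names here are hypothetical):

```
comfyui --extra-model-paths-config a1111_paths.yaml shared_models.yaml
```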
### Command Line Arguments
```
usage: comfyui.exe [-h] [-c CONFIG_FILE] [--write-out-config-file CONFIG_OUTPUT_PATH] [-w CWD] [-H [IP]] [--port PORT] [--enable-cors-header [ORIGIN]] [--max-upload-size MAX_UPLOAD_SIZE]
                   [--extra-model-paths-config PATH [PATH ...]] [--output-directory OUTPUT_DIRECTORY] [--temp-directory TEMP_DIRECTORY] [--input-directory INPUT_DIRECTORY] [--auto-launch]
                   [--disable-auto-launch] [--cuda-device DEVICE_ID] [--cuda-malloc | --disable-cuda-malloc] [--force-fp32 | --force-fp16 | --force-bf16]
                   [--bf16-unet | --fp16-unet | --fp8_e4m3fn-unet | --fp8_e5m2-unet] [--fp16-vae | --fp32-vae | --bf16-vae] [--cpu-vae]
                   [--fp8_e4m3fn-text-enc | --fp8_e5m2-text-enc | --fp16-text-enc | --fp32-text-enc] [--directml [DIRECTML_DEVICE]] [--disable-ipex-optimize]
                   [--preview-method [none,auto,latent2rgb,taesd]] [--use-split-cross-attention | --use-quad-cross-attention | --use-pytorch-cross-attention] [--disable-xformers]
                   [--force-upcast-attention | --dont-upcast-attention] [--gpu-only | --highvram | --normalvram | --lowvram | --novram | --cpu] [--disable-smart-memory] [--deterministic]
                   [--dont-print-server] [--quick-test-for-ci] [--windows-standalone-build] [--disable-metadata] [--multi-user] [--create-directories]
                   [--plausible-analytics-base-url PLAUSIBLE_ANALYTICS_BASE_URL] [--plausible-analytics-domain PLAUSIBLE_ANALYTICS_DOMAIN] [--analytics-use-identity-provider]
                   [--distributed-queue-connection-uri DISTRIBUTED_QUEUE_CONNECTION_URI] [--distributed-queue-worker] [--distributed-queue-frontend] [--distributed-queue-name DISTRIBUTED_QUEUE_NAME]
                   [--external-address EXTERNAL_ADDRESS] [--verbose] [--disable-known-models] [--max-queue-size MAX_QUEUE_SIZE] [--otel-service-name OTEL_SERVICE_NAME]
                   [--otel-service-version OTEL_SERVICE_VERSION] [--otel-exporter-otlp-endpoint OTEL_EXPORTER_OTLP_ENDPOINT]

options:
  -h, --help            show this help message and exit
  -c CONFIG_FILE, --config CONFIG_FILE
                        config file path
  --write-out-config-file CONFIG_OUTPUT_PATH
                        takes the current command line args and writes them out to a config file at the given path, then exits
  -w CWD, --cwd CWD     Specify the working directory. If not set, this is the current working directory. models/, input/, output/ and other directories will be located here by default.
                        [env var: COMFYUI_CWD]
  -H [IP], --listen [IP]
                        Specify the IP address to listen on (default: 127.0.0.1). If --listen is provided without an argument, it defaults to 0.0.0.0 (listens on all interfaces).
                        [env var: COMFYUI_LISTEN]
  --port PORT           Set the listen port. [env var: COMFYUI_PORT]
  --enable-cors-header [ORIGIN]
                        Enable CORS (Cross-Origin Resource Sharing) with optional origin or allow all with default '*'. [env var: COMFYUI_ENABLE_CORS_HEADER]
  --max-upload-size MAX_UPLOAD_SIZE
                        Set the maximum upload size in MB. [env var: COMFYUI_MAX_UPLOAD_SIZE]
  --extra-model-paths-config PATH [PATH ...]
                        Load one or more extra_model_paths.yaml files. [env var: COMFYUI_EXTRA_MODEL_PATHS_CONFIG]
  --output-directory OUTPUT_DIRECTORY
                        Set the ComfyUI output directory. [env var: COMFYUI_OUTPUT_DIRECTORY]
  --temp-directory TEMP_DIRECTORY
                        Set the ComfyUI temp directory (default is in the ComfyUI directory). [env var: COMFYUI_TEMP_DIRECTORY]
  --input-directory INPUT_DIRECTORY
                        Set the ComfyUI input directory. [env var: COMFYUI_INPUT_DIRECTORY]
  --auto-launch         Automatically launch ComfyUI in the default browser. [env var: COMFYUI_AUTO_LAUNCH]
  --disable-auto-launch
                        Disable auto launching the browser. [env var: COMFYUI_DISABLE_AUTO_LAUNCH]
  --cuda-device DEVICE_ID
                        Set the id of the cuda device this instance will use. [env var: COMFYUI_CUDA_DEVICE]
  --cuda-malloc         Enable cudaMallocAsync (enabled by default for torch 2.0 and up). [env var: COMFYUI_CUDA_MALLOC]
  --disable-cuda-malloc
                        Disable cudaMallocAsync. [env var: COMFYUI_DISABLE_CUDA_MALLOC]
  --force-fp32          Force fp32 (If this makes your GPU work better please report it). [env var: COMFYUI_FORCE_FP32]
  --force-fp16          Force fp16. [env var: COMFYUI_FORCE_FP16]
  --force-bf16          Force bf16. [env var: COMFYUI_FORCE_BF16]
  --bf16-unet           Run the UNET in bf16. This should only be used for testing stuff. [env var: COMFYUI_BF16_UNET]
  --fp16-unet           Store unet weights in fp16. [env var: COMFYUI_FP16_UNET]
  --fp8_e4m3fn-unet     Store unet weights in fp8_e4m3fn. [env var: COMFYUI_FP8_E4M3FN_UNET]
  --fp8_e5m2-unet       Store unet weights in fp8_e5m2. [env var: COMFYUI_FP8_E5M2_UNET]
  --fp16-vae            Run the VAE in fp16, might cause black images. [env var: COMFYUI_FP16_VAE]
  --fp32-vae            Run the VAE in full precision fp32. [env var: COMFYUI_FP32_VAE]
  --bf16-vae            Run the VAE in bf16. [env var: COMFYUI_BF16_VAE]
  --cpu-vae             Run the VAE on the CPU. [env var: COMFYUI_CPU_VAE]
  --fp8_e4m3fn-text-enc
                        Store text encoder weights in fp8 (e4m3fn variant). [env var: COMFYUI_FP8_E4M3FN_TEXT_ENC]
  --fp8_e5m2-text-enc   Store text encoder weights in fp8 (e5m2 variant). [env var: COMFYUI_FP8_E5M2_TEXT_ENC]
  --fp16-text-enc       Store text encoder weights in fp16. [env var: COMFYUI_FP16_TEXT_ENC]
  --fp32-text-enc       Store text encoder weights in fp32. [env var: COMFYUI_FP32_TEXT_ENC]
  --directml [DIRECTML_DEVICE]
                        Use torch-directml. [env var: COMFYUI_DIRECTML]
  --disable-ipex-optimize
                        Disables ipex.optimize when loading models with Intel GPUs. [env var: COMFYUI_DISABLE_IPEX_OPTIMIZE]
  --preview-method [none,auto,latent2rgb,taesd]
                        Default preview method for sampler nodes. [env var: COMFYUI_PREVIEW_METHOD]
  --use-split-cross-attention
                        Use the split cross attention optimization. Ignored when xformers is used. [env var: COMFYUI_USE_SPLIT_CROSS_ATTENTION]
  --use-quad-cross-attention
                        Use the sub-quadratic cross attention optimization. Ignored when xformers is used. [env var: COMFYUI_USE_QUAD_CROSS_ATTENTION]
  --use-pytorch-cross-attention
                        Use the new pytorch 2.0 cross attention function. [env var: COMFYUI_USE_PYTORCH_CROSS_ATTENTION]
  --disable-xformers    Disable xformers. [env var: COMFYUI_DISABLE_XFORMERS]
  --force-upcast-attention
                        Force enable attention upcasting, please report if it fixes black images. [env var: COMFYUI_FORCE_UPCAST_ATTENTION]
  --dont-upcast-attention
                        Disable all upcasting of attention. Should be unnecessary except for debugging. [env var: COMFYUI_DONT_UPCAST_ATTENTION]
  --gpu-only            Store and run everything (text encoders/CLIP models, etc.) on the GPU. [env var: COMFYUI_GPU_ONLY]
  --highvram            By default models will be unloaded to CPU memory after being used. This option keeps them in GPU memory. [env var: COMFYUI_HIGHVRAM]
  --normalvram          Used to force normal vram use if lowvram gets automatically enabled. [env var: COMFYUI_NORMALVRAM]
  --lowvram             Split the unet in parts to use less vram. [env var: COMFYUI_LOWVRAM]
  --novram              When lowvram isn't enough. [env var: COMFYUI_NOVRAM]
  --cpu                 To use the CPU for everything (slow). [env var: COMFYUI_CPU]
  --disable-smart-memory
                        Force ComfyUI to aggressively offload to regular ram instead of keeping models in vram when it can. [env var: COMFYUI_DISABLE_SMART_MEMORY]
  --deterministic       Make pytorch use slower deterministic algorithms when it can. Note that this might not make images deterministic in all cases.
                        [env var: COMFYUI_DETERMINISTIC]
  --dont-print-server   Don't print server output. [env var: COMFYUI_DONT_PRINT_SERVER]
  --quick-test-for-ci   Quick test for CI. Raises an error if nodes cannot be imported. [env var: COMFYUI_QUICK_TEST_FOR_CI]
  --windows-standalone-build
                        Windows standalone build: Enable convenient things that most people using the standalone windows build will probably enjoy (like auto opening the page on startup).
                        [env var: COMFYUI_WINDOWS_STANDALONE_BUILD]
  --disable-metadata    Disable saving prompt metadata in files. [env var: COMFYUI_DISABLE_METADATA]
  --multi-user          Enables per-user storage. [env var: COMFYUI_MULTI_USER]
  --create-directories  Creates the default models/, input/, output/ and temp/ directories, then exits. [env var: COMFYUI_CREATE_DIRECTORIES]
  --plausible-analytics-base-url PLAUSIBLE_ANALYTICS_BASE_URL
                        Enables server-side analytics events sent to the provided URL. [env var: COMFYUI_PLAUSIBLE_ANALYTICS_BASE_URL]
  --plausible-analytics-domain PLAUSIBLE_ANALYTICS_DOMAIN
                        Specifies the domain name for analytics events. [env var: COMFYUI_PLAUSIBLE_ANALYTICS_DOMAIN]
  --analytics-use-identity-provider
                        Uses platform identifiers for unique visitor analytics. [env var: COMFYUI_ANALYTICS_USE_IDENTITY_PROVIDER]
  --distributed-queue-connection-uri DISTRIBUTED_QUEUE_CONNECTION_URI
                        EXAMPLE: "amqp://guest:guest@127.0.0.1" - Servers and clients will connect to this AMQP URL to form a distributed queue and exchange prompt execution requests
                        and progress updates. [env var: COMFYUI_DISTRIBUTED_QUEUE_CONNECTION_URI]
  --distributed-queue-worker
                        Workers will pull requests off the AMQP URL. [env var: COMFYUI_DISTRIBUTED_QUEUE_WORKER]
  --distributed-queue-frontend
                        Frontends will start the web UI and connect to the provided AMQP URL to submit prompts. [env var: COMFYUI_DISTRIBUTED_QUEUE_FRONTEND]
  --distributed-queue-name DISTRIBUTED_QUEUE_NAME
                        This name will be used by the frontends and workers to exchange prompt requests and replies. Progress updates will be prefixed by the queue name, followed by
                        a '.', then the user ID. [env var: COMFYUI_DISTRIBUTED_QUEUE_NAME]
  --external-address EXTERNAL_ADDRESS
                        Specifies a base URL for external addresses reported by the API, such as for image paths. [env var: COMFYUI_EXTERNAL_ADDRESS]
  --verbose             Enables more debug prints. [env var: COMFYUI_VERBOSE]
  --disable-known-models
                        Disables automatic downloads of known models and prevents them from appearing in the UI. [env var: COMFYUI_DISABLE_KNOWN_MODELS]
  --max-queue-size MAX_QUEUE_SIZE
                        The API will reject prompt requests if the queue's size exceeds this value. [env var: COMFYUI_MAX_QUEUE_SIZE]
  --otel-service-name OTEL_SERVICE_NAME
                        The name of the service or application that is generating telemetry data. [env var: OTEL_SERVICE_NAME]
  --otel-service-version OTEL_SERVICE_VERSION
                        The version of the service or application that is generating telemetry data. [env var: OTEL_SERVICE_VERSION]
  --otel-exporter-otlp-endpoint OTEL_EXPORTER_OTLP_ENDPOINT
                        A base endpoint URL for any signal type, with an optionally-specified port number. Helpful for when you're sending more than one signal to the same endpoint
                        and want one environment variable to control the endpoint. [env var: OTEL_EXPORTER_OTLP_ENDPOINT]

Args that start with '--' can also be set in a config file (config.yaml or config.json or specified via -c). Config file syntax allows: key=value, flag=true, stuff=[a,b,c] (for
details, see syntax at https://goo.gl/R74nmi). In general, command-line values override environment variables which override config file values which override defaults.
```
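As the help text shows, most flags have an environment-variable equivalent: `COMFYUI_` plus the flag name upper-cased with hyphens replaced by underscores (the `--otel-*` flags use the standard `OTEL_*` names instead). A small sketch of that naming pattern, not part of ComfyUI itself:

```python
# Derive the env var name for a `--flag-name` style option, following the
# pattern visible in the help text above (assumed, not an official API).
def env_var_for_flag(flag: str) -> str:
    name = flag.lstrip("-").replace("-", "_").upper()
    # The OpenTelemetry flags keep their conventional OTEL_* names.
    return name if name.startswith("OTEL_") else f"COMFYUI_{name}"

print(env_var_for_flag("--max-upload-size"))    # COMFYUI_MAX_UPLOAD_SIZE
print(env_var_for_flag("--otel-service-name"))  # OTEL_SERVICE_NAME
```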
# Using ComfyUI as an API / Programmatically

docs/examples/configuration/extra_model_paths.yaml

@@ -1,8 +1,8 @@
#Rename this to extra_model_paths.yaml and ComfyUI will load it

# Config for a1111 ui
# All you have to do is change the base_path to where yours is installed
a1111:
    base_path: path/to/stable-diffusion-webui/
    checkpoints: models/Stable-diffusion
@@ -19,8 +19,8 @@ a1111:
    hypernetworks: models/hypernetworks
    controlnet: models/ControlNet

# Config for comfyui
# Your base path should be either an existing comfy install or a central folder where you store all of your models, loras, etc.
#comfyui:
#    base_path: path/to/comfyui/
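Uncommented and filled in, the comfyui section could look like the following. The paths here are illustrative, not required values; any folder layout works as long as the keys point at your model directories:

```
comfyui:
    base_path: path/to/comfyui/
    checkpoints: models/checkpoints/
    loras: models/loras/
    vae: models/vae/
```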