Update compose file to use tagged image
Commit 7179ed44bd (parent 7cd6383110)

README.md (59 lines changed)

@@ -14,6 +14,7 @@ A vanilla, up-to-date fork of [ComfyUI](https://github.com/comfyanonymous/comfyu
- [New configuration options](#command-line-arguments) for directories, models and metrics.
- [API](#using-comfyui-as-an-api--programmatically) support, using the vanilla ComfyUI API and new API endpoints.
- [Embed](#embedded) ComfyUI as a library inside your Python application. No server or frontend needed.
- [Docker Compose](#docker-compose) for running on Linux and Windows with CUDA acceleration.
- [Containers](#containers) for running on Linux, Windows and Kubernetes with CUDA acceleration.
- Automated tests for new features.
@@ -1373,6 +1374,62 @@ Since reading models like large checkpoints over the network can be slow, you ca

Known models listed in [**model_downloader.py**](./comfy/model_downloader.py) are downloaded using `huggingface_hub` with the default `cache_dir`. This means you can mount a read-write-many volume, like an SMB share, into the default cache directory. Read more about this [here](https://huggingface.co/docs/huggingface_hub/en/guides/download).
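For example, here is a minimal sketch of one way to share that cache between containers: it points `HF_HOME` (the environment variable `huggingface_hub` uses to locate its default cache) at a volume backed by an SMB/CIFS share. The share address, credentials, and paths are placeholders, not values from this repository:

```yaml
services:
  backend:
    environment:
      # huggingface_hub resolves its default cache under $HF_HOME/hub
      - HF_HOME=/workspace/hf-home
    volumes:
      - hf_home:/workspace/hf-home

volumes:
  # named volume backed by an SMB/CIFS share (placeholder server and credentials)
  hf_home:
    driver: local
    driver_opts:
      type: cifs
      o: "username=comfy,password=changeme,uid=0,gid=0"
      device: "//fileserver/hf-cache"
```

Every replica that mounts the same share then reuses previously downloaded checkpoints instead of fetching them again.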
# Docker Compose

This repository includes a `docker-compose.yml` file to simplify running ComfyUI with Docker.

## Docker Volumes vs. Local Directories

By default, the `docker-compose.yml` file uses a Docker-managed volume named `workspace_data`. This volume stores all of ComfyUI's data, including models, inputs, and outputs. This is the most straightforward way to get started, but it can be less convenient if you want to manage these files directly from your host machine.
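For reference, the relevant portion of the default `docker-compose.yml` looks roughly like this (abridged; the real file contains additional settings):

```yaml
services:
  backend:
    volumes:
      # USING DOCKER MANAGED VOLUMES
      - workspace_data:/workspace

volumes:
  # declares the Docker-managed volume
  workspace_data: {}
```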
For more direct control, you can configure Docker Compose to use local directories (bind mounts) instead. This maps folders on your host machine directly into the container.

To switch to using local directories, edit `docker-compose.yml`:

1. In both the `backend` and `frontend` services, replace `- workspace_data:/workspace` with the specific local directories you want to mount. For example:
```yaml
services:
  backend:
    volumes:
      # - workspace_data:/workspace # Comment out or remove this line
      - ./models:/workspace/models
      - ./custom_nodes:/workspace/custom_nodes
      - ./output:/workspace/output
      - ./input:/workspace/input
    ...
  frontend:
    volumes:
      # - workspace_data:/workspace # Comment out or remove this line
      - ./models:/workspace/models
      - ./custom_nodes:/workspace/custom_nodes
      - ./output:/workspace/output
      - ./input:/workspace/input
```
2. At the bottom of the file, remove or comment out the `workspace_data: {}` definition under `volumes`.
```yaml
volumes:
  # workspace_data: {} # Comment out or remove this line
```
Before running `docker compose up`, make sure the local directories (`./models`, `./custom_nodes`, etc.) exist in the same directory as your `docker-compose.yml` file.

The example `docker-compose.yml` file contains other configuration settings; read it carefully. You can also declare bind-mounted volumes under the top-level `volumes` key of the compose file, as sketched below.
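For instance, a named volume declared at the top level can be backed by a host directory through the `local` driver's bind options; the host path below is an assumption to replace with your own:

```yaml
volumes:
  workspace_data:
    driver: local
    driver_opts:
      type: none
      o: bind
      # assumed host path; the directory must already exist
      device: /srv/comfyui/workspace
```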
## Running with Docker Compose
### Linux
Before you begin, install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) for Docker so that containers can access the GPU.
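Installing the toolkit only exposes the GPU to Docker; the compose file also has to request it. In the Compose specification that request typically looks like the sketch below, shown for reference only (the stanza in this repository's `docker-compose.yml` may differ):

```yaml
services:
  backend:
    deploy:
      resources:
        reservations:
          devices:
            # request all NVIDIA GPUs on the host for this service
            - driver: nvidia
              count: all
              capabilities: [gpu]
```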
### Starting the Stack
```shell
docker compose up
```
# Containers
On NVIDIA:

@@ -1417,4 +1474,4 @@ Your sponsorship enables us to dedicate significant engineering time to this pro

A lot of work goes into maintaining a project of this scale: not just software development, but also responding to issues, reviewing contributions, writing high-quality documentation, testing across different platforms (Windows, Linux, macOS, CUDA, ROCm), and keeping the ecosystem of dependent custom nodes up-to-date.

Your financial support allows us to prioritize these tasks and continue to move this project and its community forward.
docker-compose.yml

@@ -1,9 +1,12 @@
 name: "comfyui"
 services:
   backend:
-    build:
-      context: .
-      dockerfile: Dockerfile
+    # USING THE BUILT IMAGE FROM GITHUB
+    image: "ghcr.io/hiddenswitch/comfyui:latest"
+    # OR: USE THE LOCAL DOCKERFILE FROM GIT REPO
+    # build:
+    #   context: .
+    #   dockerfile: Dockerfile
     volumes:
       # USING DOCKER MANAGED VOLUMES
       - workspace_data:/workspace
@@ -35,9 +38,12 @@ services:
       start_period: 10s
     restart: unless-stopped
   frontend:
-    build:
-      context: .
-      dockerfile: Dockerfile
+    # USING THE BUILT IMAGE FROM GITHUB
+    image: "ghcr.io/hiddenswitch/comfyui:latest"
+    # OR: USE THE LOCAL DOCKERFILE FROM GIT REPO
+    # build:
+    #   context: .
+    #   dockerfile: Dockerfile
     deploy:
       replicas: 1
     volumes: