# ComfyUI-Docker

An automated repository for ComfyUI Docker image builds, optimized for NVIDIA GPUs.
About • Features • Getting Started • Usage • License
## About
This image packages upstream ComfyUI with CUDA-enabled PyTorch and an entrypoint that can build SageAttention at container startup for modern NVIDIA GPUs.
The base image is python:3.12-slim (Debian trixie) with CUDA 12.9 developer libraries installed via apt and PyTorch installed from the cu129 wheel index.
It syncs with the upstream ComfyUI repository, builds a Docker image on new releases, and pushes it to GitHub Container Registry (GHCR).
I created this repo for myself as a simple way to stay up to date with the latest ComfyUI versions while having an easy-to-use Docker image.
## Features
- Daily checks for upstream releases, auto-merges changes, and builds/pushes Docker images.
- CUDA-enabled PyTorch + Triton on Debian trixie with CUDA 12.9 dev libs so custom CUDA builds work at runtime.
- Non-root runtime with PUID/PGID mapping handled by entrypoint for volume permissions.
- ComfyUI-Manager auto-sync on startup; entrypoint scans custom_nodes and installs requirements when COMFY_AUTO_INSTALL=1.
- SageAttention build-on-start with TORCH_CUDA_ARCH_LIST tuned to detected GPUs; enabling is opt-in at runtime via FORCE_SAGE_ATTENTION=1.
## Getting Started
- Install the NVIDIA Container Toolkit on the host, then use `docker run --gpus all` or Compose GPU reservations to pass GPUs through.
- Expose the ComfyUI server on port 8188 (default) and map volumes for models, inputs, outputs, and `custom_nodes`.
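The two steps above can be combined into a single quick-start command. The host paths below are examples taken from the Compose file later in this README; adjust them to your layout:

```shell
# Requires the NVIDIA Container Toolkit on the host
docker run -d --name ComfyUI \
  --gpus all \
  -p 8188:8188 \
  -e PUID=1000 -e PGID=1000 \
  -v /mnt/comfyui/models:/app/ComfyUI/models \
  -v /mnt/comfyui/input:/app/ComfyUI/input \
  -v /mnt/comfyui/output:/app/ComfyUI/output \
  ghcr.io/clsferguson/comfyui-docker:latest
```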
### Pulling the Image
The latest image is available on GHCR:
```shell
docker pull ghcr.io/clsferguson/comfyui-docker:latest
```
For a specific version (synced with upstream tags, starting at 0.3.59):
```shell
docker pull ghcr.io/clsferguson/comfyui-docker:vX.Y.Z
```
### Docker Compose
For easier management, use this `docker-compose.yml`:
```yaml
services:
  comfyui:
    image: ghcr.io/clsferguson/comfyui-docker:latest
    container_name: ComfyUI
    runtime: nvidia
    restart: unless-stopped
    ports:
      - 8188:8188
    environment:
      - TZ=America/Edmonton
      - PUID=1000
      - PGID=1000
    gpus: all
    volumes:
      - comfyui_data:/app/ComfyUI/user/default
      - comfyui_nodes:/app/ComfyUI/custom_nodes
      - /mnt/comfyui/models:/app/ComfyUI/models
      - /mnt/comfyui/input:/app/ComfyUI/input
      - /mnt/comfyui/output:/app/ComfyUI/output
```
Run with `docker compose up -d`.
## Usage
- Open http://localhost:8188 once the container is up; change the external port via `-p HOST:8188`, or the internal port and bind address with ComfyUI's `--port`/`--listen` flags.
- To target specific GPUs, use Docker's GPU device selection (`--gpus`) or Compose `device_ids` in the GPU reservation.
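For example, to expose only the first GPU to the container (device index `0` is illustrative):

```shell
# Quote the device spec so the shell passes it through verbatim
docker run -d --gpus '"device=0"' -p 8188:8188 \
  ghcr.io/clsferguson/comfyui-docker:latest
```

The equivalent selection in Compose uses `device_ids: ["0"]` under `deploy.resources.reservations.devices`.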
### SageAttention
- The entrypoint builds and caches SageAttention on startup when GPUs are detected; runtime activation is controlled by `FORCE_SAGE_ATTENTION=1`.
- If the SageAttention import test fails, the entrypoint logs a warning and starts ComfyUI without `--use-sage-attention`, even if `FORCE_SAGE_ATTENTION=1`.
- To enable it, set `FORCE_SAGE_ATTENTION=1` and restart; to disable it, omit the variable or set it to `0`.
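A minimal opt-in sketch using `docker run` (only the environment variable differs from the standard run command):

```shell
# Opt in to SageAttention; the entrypoint still verifies the import
# succeeds before adding --use-sage-attention
docker run -d --gpus all -p 8188:8188 \
  -e FORCE_SAGE_ATTENTION=1 \
  ghcr.io/clsferguson/comfyui-docker:latest
```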
### Environment Variables
- `PUID`/`PGID`: map the container user to a host UID/GID for volume write access.
- `COMFY_AUTO_INSTALL=1`: auto-install Python requirements from `custom_nodes` on startup.
- `FORCE_SAGE_ATTENTION=0|1`: if `1` and the module import test passes, the entrypoint adds `--use-sage-attention`.
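Putting the variables together in one `docker run` invocation (the values shown are examples, not defaults):

```shell
docker run -d --gpus all -p 8188:8188 \
  -e PUID=1000 -e PGID=1000 \
  -e COMFY_AUTO_INSTALL=1 \
  -e FORCE_SAGE_ATTENTION=0 \
  ghcr.io/clsferguson/comfyui-docker:latest
```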
## License
Distributed under the MIT License (same as upstream ComfyUI). See LICENSE for more information.
## Contact
- Creator: clsferguson - GitHub
- Project Link: https://github.com/clsferguson/ComfyUI-Docker
Built with ❤️ for easy AI workflows.