
ComfyUI-Docker

An automated repository for ComfyUI Docker image builds, optimized for NVIDIA GPUs.

About · Features · Getting Started · Usage · License


About

This image packages upstream ComfyUI with CUDA-enabled PyTorch and an entrypoint that can build SageAttention at container startup for modern NVIDIA GPUs.

The base image is python:3.12-slim (Debian trixie) with CUDA 12.9 developer libraries installed via apt and PyTorch installed from the cu129 wheel index.

The repository syncs with upstream ComfyUI, builds a Docker image on each new release, and pushes it to GitHub Container Registry (GHCR).

I created this repo for myself as a simple way to stay up to date with the latest ComfyUI versions while having an easy-to-use Docker image.


Features

  • Daily checks for upstream releases, auto-merges changes, and builds/pushes Docker images.
  • CUDA-enabled PyTorch + Triton on Debian trixie with CUDA 12.9 dev libs so custom CUDA builds work at runtime.
  • Non-root runtime with PUID/PGID mapping handled by the entrypoint for correct volume permissions.
  • ComfyUI-Manager auto-sync on startup; the entrypoint scans custom_nodes and installs each node's requirements when COMFY_AUTO_INSTALL=1.
  • SageAttention build-on-start with TORCH_CUDA_ARCH_LIST tuned to detected GPUs; enabling is opt-in at runtime via FORCE_SAGE_ATTENTION=1.

Getting Started

  • Install NVIDIA Container Toolkit on the host, then use docker run --gpus all or Compose GPU reservations to pass GPUs through.
  • Expose the ComfyUI server on port 8188 (default) and map volumes for models, inputs, outputs, and custom_nodes.
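Putting those two steps together, a minimal docker run invocation might look like the following sketch. The host paths under /mnt/comfyui are examples taken from the Compose file below; adjust them to your own layout:

```shell
# Minimal quickstart: GPU passthrough, default port, user mapping, and the
# volume mounts described above. Host paths are illustrative.
docker run -d --name ComfyUI \
  --gpus all \
  -p 8188:8188 \
  -e PUID=$(id -u) -e PGID=$(id -g) \
  -v /mnt/comfyui/models:/app/ComfyUI/models \
  -v /mnt/comfyui/input:/app/ComfyUI/input \
  -v /mnt/comfyui/output:/app/ComfyUI/output \
  ghcr.io/clsferguson/comfyui-docker:latest
```

Using $(id -u)/$(id -g) keeps files written to the mounted volumes owned by your host user rather than root.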

Pulling the Image

The latest image is available on GHCR:

docker pull ghcr.io/clsferguson/comfyui-docker:latest

For a specific version (synced with upstream tags, starting at 0.3.59):

docker pull ghcr.io/clsferguson/comfyui-docker:vX.Y.Z

Docker Compose

For easier management, use this docker-compose.yml:

services:
  comfyui:
    image: ghcr.io/clsferguson/comfyui-docker:latest
    container_name: ComfyUI
    runtime: nvidia
    restart: unless-stopped
    ports:
      - 8188:8188
    environment:
      - TZ=America/Edmonton
      - PUID=1000
      - PGID=1000
    gpus: all
    volumes:
      - comfyui_data:/app/ComfyUI/user/default
      - comfyui_nodes:/app/ComfyUI/custom_nodes
      - /mnt/comfyui/models:/app/ComfyUI/models
      - /mnt/comfyui/input:/app/ComfyUI/input
      - /mnt/comfyui/output:/app/ComfyUI/output

Run with docker compose up -d.


Usage

  • Open http://localhost:8188 after the container is up; change the host port via -p HOST:8188, or the internal port and bind address with ComfyUI's --port/--listen flags.
  • To target specific GPUs, use Docker's --gpus device selection or the device_ids field in a Compose deploy.resources.reservations block.
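For example, pinning the container to a single GPU with Docker's device syntax looks like this (the device index depends on your host; other flags as in the quickstart):

```shell
# Expose only GPU 0 to the container instead of all GPUs.
# Note the nested quoting: the inner quotes are required by the --gpus parser.
docker run -d --name ComfyUI \
  --gpus '"device=0"' \
  -p 8188:8188 \
  ghcr.io/clsferguson/comfyui-docker:latest
```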

SageAttention

  • The entrypoint builds and caches SageAttention on startup when GPUs are detected; runtime activation is controlled by FORCE_SAGE_ATTENTION=1.
  • If the SageAttention import test fails, the entrypoint logs a warning and starts ComfyUI without --use-sage-attention even if FORCE_SAGE_ATTENTION=1.
  • To enable: set FORCE_SAGE_ATTENTION=1 and restart; to disable, omit or set to 0.
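Concretely, opting in from the command line is a single environment variable (a sketch; combine with your usual port, volume, and user-mapping flags):

```shell
# Opt in to SageAttention. The entrypoint still runs its import test before
# adding --use-sage-attention, so a failed build falls back to a normal start.
docker run -d --name ComfyUI \
  --gpus all \
  -p 8188:8188 \
  -e FORCE_SAGE_ATTENTION=1 \
  ghcr.io/clsferguson/comfyui-docker:latest
```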

Environment Variables

  • PUID/PGID: map container user to host UID/GID for volume write access.
  • COMFY_AUTO_INSTALL=1: auto-install Python requirements from custom_nodes on startup.
  • FORCE_SAGE_ATTENTION=0|1: if 1 and the module import test passes, the entrypoint adds --use-sage-attention.
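In Compose, the same variables go under the service's environment key (values here are illustrative):

```yaml
    environment:
      - TZ=America/Edmonton
      - PUID=1000
      - PGID=1000
      - COMFY_AUTO_INSTALL=1
      - FORCE_SAGE_ATTENTION=1
```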

License

Distributed under the MIT License. See LICENSE for more information.


Contact

Built with ❤️ for easy AI workflows.