
ComfyUI-Docker

An automated Repo for ComfyUI Docker image builds, optimized for NVIDIA GPUs.

About • Features • Getting Started • Usage • License


About

This image packages upstream ComfyUI with CUDA-enabled PyTorch and an entrypoint that can build SageAttention at container startup for modern NVIDIA GPUs.

The base image is python:3.12-slim (Debian trixie) with CUDA 12.9 developer libraries installed via apt and PyTorch installed from the cu129 wheel index.

It syncs with the upstream ComfyUI repository, builds a Docker image on new releases, and pushes it to GitHub Container Registry (GHCR).

I created this repo for myself as a simple way to stay up to date with the latest ComfyUI versions while having an easy-to-use Docker image.


Features

  • Daily checks for upstream releases, auto-merges changes, and builds/pushes Docker images.
  • CUDA-enabled PyTorch + Triton on Debian trixie with CUDA 12.9 dev libs so custom CUDA builds work at runtime.
  • Non-root runtime with PUID/PGID mapping handled by entrypoint for volume permissions.
  • ComfyUI-Manager auto-sync on startup; entrypoint scans custom_nodes and installs requirements when COMFY_AUTO_INSTALL=1.
  • SageAttention build-on-start with TORCH_CUDA_ARCH_LIST tuned to detected GPUs; enabling is opt-in at runtime via FORCE_SAGE_ATTENTION=1.

Getting Started

  • Install NVIDIA Container Toolkit on the host, then use docker run --gpus all or Compose GPU reservations to pass GPUs through.
  • Expose the ComfyUI server on port 8188 (default) and map volumes for models, inputs, outputs, and custom_nodes.
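The two steps above can be combined into a single docker run invocation. A minimal sketch, assuming the NVIDIA Container Toolkit is installed; the host paths are placeholders, adjust them to your layout:

```shell
# Pass all GPUs through, publish the default port, and mount the
# directories ComfyUI reads and writes. Host paths are examples.
docker run -d --name ComfyUI \
  --gpus all \
  -p 8188:8188 \
  -e PUID=1000 -e PGID=1000 \
  -v /mnt/comfyui/models:/app/ComfyUI/models \
  -v /mnt/comfyui/input:/app/ComfyUI/input \
  -v /mnt/comfyui/output:/app/ComfyUI/output \
  ghcr.io/clsferguson/comfyui-docker:latest
```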

Pulling the Image

The latest image is available on GHCR:

docker pull ghcr.io/clsferguson/comfyui-docker:latest

For a specific version (synced with upstream tags, starting at 0.3.59):

docker pull ghcr.io/clsferguson/comfyui-docker:vX.Y.Z

Docker Compose

For easier management, use this docker-compose.yml:

services:
  comfyui:
    image: ghcr.io/clsferguson/comfyui-docker:latest
    container_name: ComfyUI
    runtime: nvidia
    restart: unless-stopped
    ports:
      - 8188:8188
    environment:
      - TZ=America/Edmonton
      - PUID=1000
      - PGID=1000
    gpus: all
    volumes:
      - comfyui_data:/app/ComfyUI/user/default
      - comfyui_nodes:/app/ComfyUI/custom_nodes
      - /mnt/comfyui/models:/app/ComfyUI/models
      - /mnt/comfyui/input:/app/ComfyUI/input
      - /mnt/comfyui/output:/app/ComfyUI/output

Run with docker compose up -d.


Usage

  • Open http://localhost:8188 once the container is up. Change the host-side port with -p HOST:8188 (or the ports mapping in Compose); change the internal port or bind address by passing --port/--listen to ComfyUI.
  • To target specific GPUs, use Docker's GPU device selection (e.g. --gpus '"device=0"') or Compose device_ids under a device reservation.
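For Compose, GPU selection can be sketched with a device reservation; the device_ids value below is an example that exposes only GPU 0 to the container:

```yaml
services:
  comfyui:
    image: ghcr.io/clsferguson/comfyui-docker:latest
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ["0"]      # example: expose only GPU 0
              capabilities: [gpu]
```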

SageAttention

  • The entrypoint builds and caches SageAttention on startup when GPUs are detected; runtime activation is controlled by FORCE_SAGE_ATTENTION=1.
  • If the SageAttention import test fails, the entrypoint logs a warning and starts ComfyUI without --use-sage-attention even if FORCE_SAGE_ATTENTION=1.
  • To enable: set FORCE_SAGE_ATTENTION=1 and restart; to disable, omit or set to 0.

Environment Variables

  • PUID/PGID: map container user to host UID/GID for volume write access.
  • COMFY_AUTO_INSTALL=1: auto-install Python requirements from custom_nodes on startup.
  • FORCE_SAGE_ATTENTION=0|1: if 1 and the module import test passes, the entrypoint adds --use-sage-attention.
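Putting the variables above together, a sketch of the environment block for the Compose file shown earlier (values are examples):

```yaml
    environment:
      - TZ=America/Edmonton       # container timezone
      - PUID=1000                 # host UID that should own the mounted volumes
      - PGID=1000                 # matching host GID
      - COMFY_AUTO_INSTALL=1      # install custom_nodes requirements on startup
      - FORCE_SAGE_ATTENTION=1    # add --use-sage-attention if the import test passes
```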

License

Distributed under the MIT License; see LICENSE for more information. Note that upstream ComfyUI itself is licensed separately under GPL-3.0.


Contact

Built with ❤️ for easy AI workflows.