ComfyUI-Docker
An automated repository for ComfyUI Docker image builds, optimized for NVIDIA GPUs.
About • Features • Getting Started • Usage • License
About
This repository automates the creation of Docker images for ComfyUI, a powerful and modular stable diffusion GUI and backend. It syncs with the upstream ComfyUI repository, builds a Docker image on new releases, and pushes it to GitHub Container Registry (GHCR).
I created this repo for myself as a simple way to stay up to date with the latest ComfyUI versions while having an easy-to-use Docker image. It's particularly suited for setups with NVIDIA GPUs, leveraging CUDA for accelerated performance.
Built With
- Docker
- GitHub Actions for automation
- PyTorch with CUDA support
- Based on Python 3.12 slim image
Features
- Automated Sync & Build: Daily checks for upstream releases, auto-merges changes, and builds/pushes Docker images.
- NVIDIA GPU Ready: Pre-configured with CUDA-enabled PyTorch for seamless GPU acceleration.
- Non-Root Runtime: Runs as a non-root user for better security.
- Pre-Installed Manager: Includes ComfyUI-Manager for easy node/extensions management.
- SageAttention 2.2 baked in: The image compiles SageAttention 2.2/2++ from the upstream repository during the Docker build, so the latest kernels for modern NVIDIA GPUs are included by default.
- Auto-enabled at launch: ComfyUI is started with the `--use-sage-attention` flag, so SageAttention is activated automatically on startup (no extra steps required).
Getting Started
Prerequisites
- Docker: Installed on your host (e.g., Docker Desktop or Engine).
- NVIDIA GPU: For GPU support (ensure NVIDIA drivers and CUDA are installed on the host).
- NVIDIA Container Toolkit: For GPU passthrough in Docker (install via the official guide).
Pulling the Image
The latest image is available on GHCR:

```shell
docker pull ghcr.io/clsferguson/comfyui-docker:latest
```

For a specific version (tags are synced with upstream, starting at 0.3.57):

```shell
docker pull ghcr.io/clsferguson/comfyui-docker:vX.Y.Z
```
Docker Compose
For easier management, use this docker-compose.yml:
```yaml
services:
  comfyui:
    image: ghcr.io/clsferguson/comfyui-docker:latest
    container_name: ComfyUI
    runtime: nvidia
    restart: unless-stopped
    ports:
      - 8188:8188
    environment:
      - TZ=America/Edmonton
      - PUID=1000
      - PGID=1000
    gpus: all
    volumes:
      - comfyui_data:/app/ComfyUI/user/default
      - comfyui_nodes:/app/ComfyUI/custom_nodes
      - /mnt/comfyui/models:/app/ComfyUI/models
      - /mnt/comfyui/input:/app/ComfyUI/input
      - /mnt/comfyui/output:/app/ComfyUI/output

volumes:
  comfyui_data:
  comfyui_nodes:
```
Run with `docker compose up -d`.
Usage
Basic Usage
Access ComfyUI at http://localhost:8188 after starting the container using Docker Compose.
SageAttention
- SageAttention 2.2 is built into the image and enabled automatically on startup via `--use-sage-attention`.
- To verify, check the container logs on startup; ComfyUI prints a line indicating SageAttention is active.
Environment Variables
- Set via a `.env` file or `-e` flags with `docker compose` or `docker run`.
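As a sketch, a `.env` file next to the compose file could hold the same values used in the compose example above (the values are placeholders; `PGID` is assumed to be the group-ID counterpart of `PUID`):

```shell
TZ=America/Edmonton
PUID=1000
PGID=1000
```

With Docker Compose these can be injected via the `env_file:` key on the service, or passed individually with `-e` on `docker run`.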
License
Distributed under the MIT License (same as upstream ComfyUI). See LICENSE for more information.
Contact
- Creator: clsferguson - GitHub
- Project Link: https://github.com/clsferguson/ComfyUI-Docker
Built with ❤️ for easy AI workflows.