# ComfyUI-Docker

An automated repo for ComfyUI Docker image builds, optimized for NVIDIA GPUs.

About • Features • Getting Started • Usage • License

## About
This repository automates the creation of Docker images for ComfyUI, a powerful and modular stable diffusion GUI and backend. It syncs with the upstream ComfyUI repository, builds a Docker image on new releases, and pushes it to GitHub Container Registry (GHCR).
I created this repo for myself as a simple way to stay up to date with the latest ComfyUI versions while having an easy-to-use Docker image. It's particularly suited for setups with NVIDIA GPUs, leveraging CUDA for accelerated performance.
### Built With
- Docker
- GitHub Actions for automation
- PyTorch with CUDA support
- Based on Python 3.12 slim image
## Features
- Automated Sync & Build: Daily checks for upstream releases, auto-merges changes, and builds/pushes Docker images.
- NVIDIA GPU Ready: Pre-configured with CUDA-enabled PyTorch for seamless GPU acceleration.
- Non-Root Runtime: Runs as a non-root user for better security.
- Pre-Installed Manager: Includes ComfyUI-Manager for easy node/extensions management.
- SageAttention 2.2 baked in: The image compiles SageAttention 2.2 (SageAttention2++) from the upstream repository during the Docker build, so the latest kernels for modern NVIDIA GPUs are included by default.
- Auto-enabled at launch: ComfyUI is started with the `--use-sage-attention` flag, so SageAttention is activated automatically on startup (no extra steps required).
## Getting Started

### Prerequisites
- Docker: Installed on your host (e.g., Docker Desktop or Engine).
- NVIDIA GPU: For GPU support (ensure NVIDIA drivers and CUDA are installed on the host).
- NVIDIA Container Toolkit: For GPU passthrough in Docker (install via the official guide).
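With the prerequisites in place, a quick way to confirm GPU passthrough works is to run `nvidia-smi` inside a throwaway CUDA container. The image tag below is only an example; any CUDA base image that ships `nvidia-smi` will do.

```shell
# If the NVIDIA Container Toolkit is set up correctly, this prints the
# same GPU table you would see from nvidia-smi on the host.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```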
### Pulling the Image

The latest image is available on GHCR:

```shell
docker pull ghcr.io/clsferguson/comfyui-docker:latest
```

For a specific version (synced with upstream tags, starting at 0.3.57):

```shell
docker pull ghcr.io/clsferguson/comfyui-docker:vX.Y.Z
```
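If you'd rather not use Compose, a minimal `docker run` invocation might look like the sketch below (volume mounts for models, input, and output are omitted for brevity; add `-v` flags as needed):

```shell
# Start ComfyUI detached with all GPUs and the web UI on port 8188
docker run -d --name ComfyUI --gpus all \
  -p 8188:8188 \
  ghcr.io/clsferguson/comfyui-docker:latest
```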
### Docker Compose

For easier management, use this `docker-compose.yml`:
```yaml
services:
  comfyui:
    image: ghcr.io/clsferguson/comfyui-docker:latest
    container_name: ComfyUI
    runtime: nvidia
    restart: unless-stopped
    ports:
      - 8188:8188
    environment:
      - TZ=America/Edmonton
      - PUID=1000
      - PGID=1000
    gpus: all
    volumes:
      - comfyui_data:/app/ComfyUI/user/default
      - comfyui_nodes:/app/ComfyUI/custom_nodes
      - /mnt/comfyui/models:/app/ComfyUI/models
      - /mnt/comfyui/input:/app/ComfyUI/input
      - /mnt/comfyui/output:/app/ComfyUI/output

volumes:
  comfyui_data:
  comfyui_nodes:
```
Run with `docker compose up -d`.
## Usage

### Basic Usage

Access ComfyUI at `http://localhost:8188` after starting the container with Docker Compose.
### SageAttention
- SageAttention 2.2 is built into the image and enabled automatically on startup via `--use-sage-attention`.
- To verify, check the container logs on startup; ComfyUI will print a line indicating SageAttention is active.
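From the host, one way to spot that line is to grep the container logs. The exact wording of the log message may vary between ComfyUI versions, so treat the pattern below as a guess rather than a fixed string:

```shell
# Look for a SageAttention-related line in the startup logs
docker logs ComfyUI 2>&1 | grep -i sage
```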
### Environment Variables

- Set via a `.env` file or `-e` flags in `docker compose` or `docker run`.
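For example, a `.env` file placed next to `docker-compose.yml` (values illustrative; `docker compose` reads it automatically) could mirror the variables from the Compose example. `PUID`/`PGID` follow the common linuxserver-style convention; check the image's entrypoint for the exact variable names it honors.

```
# .env — read automatically by docker compose
TZ=America/Edmonton
# user and group ids for the non-root runtime user
PUID=1000
PGID=1000
```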
## License
Distributed under the MIT License (same as upstream ComfyUI). See LICENSE for more information.
## Contact
- Creator: clsferguson - GitHub
- Project Link: https://github.com/clsferguson/ComfyUI-Docker
Built with ❤️ for easy AI workflows.