Switch from python:3.12-slim-trixie to a multi-stage NVIDIA CUDA 12.9 Ubuntu 22.04 build: use devel for compilation (nvcc) and runtime for the final image. Compile SageAttention 2.2+ from upstream source during the image build, resolving the latest commit so the build pins an exact ref, and install without build isolation so the extension compiles against the already-installed Torch. Install Triton (>=3.0.0) alongside Torch cu129 and start ComfyUI with --use-sage-attention by default. Add a SAGE_FORCE_REFRESH build-arg to re-resolve the ref and bust the layer cache when needed. This improves reproducibility, reduces startup latency, and keeps nvcc out of production for a smaller final image.
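A minimal sketch of the ref-resolution step, assuming the upstream repo lives at github.com/thu-ml/SageAttention; the helper is illustrative, not the Dockerfile's actual mechanism:

```python
import subprocess

def resolve_latest_ref(repo_url: str, ref: str = "HEAD") -> str:
    # Ask the remote for the commit SHA behind `ref` so the build can
    # pin SageAttention to an exact commit; re-running this (e.g. when
    # SAGE_FORCE_REFRESH changes) busts the Docker layer cache whenever
    # upstream moves.
    out = subprocess.run(
        ["git", "ls-remote", repo_url, ref],
        check=True, capture_output=True, text=True,
    ).stdout
    return out.split()[0]  # "<sha>\t<ref>" -> full commit SHA

print(resolve_latest_ref("https://github.com/thu-ml/SageAttention"))
```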
Switch to a two-stage Dockerfile that builds SageAttention 2.2 from source on python:3.12-slim-trixie: explicitly enable contrib/non-free/non-free-firmware in APT, install Debian’s nvidia-cuda-toolkit (nvcc) for compilation, and install the produced cp312 wheel into the slim runtime so --use-sage-attention works at startup. The builder installs Torch cu129 to match the runtime for ABI compatibility and uses pip’s --break-system-packages to skip a venv, deliberately overriding PEP 668’s guard in a controlled build context, keeping layers lean and avoiding the prior sources.list and disk-space issues seen on GitHub runners. The final image remains minimal while bundling an up-to-date SageAttention build aligned with the Torch/CUDA stack in use.
Replace job-level continue-on-error with a step-level setting and export build_succeeded from the docker/build-push step to drive the fallback condition, guaranteeing the self-hosted job runs whenever the GitHub runner fails (e.g., disk space) instead of being masked by a successful job conclusion. Update publish/finalize gating to rely on the explicit output flag (or self-hosted success) so releases proceed only when at least one build path publishes successfully.
Switch to a two-stage build that uses python:3.12-slim-trixie as both builder and runtime, enabling contrib/non-free/non-free-firmware in APT to install Debian’s nvidia-cuda-toolkit (nvcc) for compiling SageAttention 2.2 from source. Install Torch cu129 in the builder and build a cp312 wheel, then copy and install that wheel into the slim runtime so --use-sage-attention works at startup. This removes the heavy CUDA devel base, avoids a venv by permitting pip system installs during build, and keeps the final image minimal while ensuring ABI alignment with Torch cu129.
Switch the builder stage to nvidia/cuda:12.9.0-devel-ubuntu24.04, create a Python 3.12 venv to avoid PEP 668 “externally managed” errors, install Torch 2.8.0+cu129 into that venv, and build a cp312 SageAttention 2.2 wheel from upstream; then copy the wheel into the slim runtime and install it so --use-sage-attention works at startup.
This resolves prior build failures on Debian Trixie slim, where CUDA toolkit packages were unavailable, and fixes the runtime ModuleNotFoundError by ensuring the module is present in the exact interpreter ComfyUI uses.
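A minimal sketch of the venv step, assuming an illustrative /opt/venv path:

```python
import venv

# Create an isolated interpreter so pip installs don't trip the
# PEP 668 "externally managed environment" guard of the system Python.
venv.create("/opt/venv", with_pip=True)
# Later build steps would call /opt/venv/bin/pip to install
# Torch 2.8.0+cu129 and build the SageAttention wheel.
```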
Switch the builder stage to an NVIDIA CUDA devel image (12.9.0) to provide nvcc and headers, shallow-clone SageAttention, and build a cp312 wheel against the same Torch (2.8.0+cu129) as the runtime; copy and install the wheel into the slim runtime to ensure the module is present at launch. This replaces the previous approach that only added the launch flag and failed at runtime with ModuleNotFoundError, and avoids apt failures for CUDA packages on Debian Trixie slim while keeping the final image minimal and ABI-aligned.
Introduce a two-stage Docker build that compiles SageAttention 2.2/2++ from the upstream repository using Debian’s CUDA toolkit (nvcc) and the same Torch stack (cu129) as the runtime, then installs the produced wheel in the final slim image. This ensures the sageattention module is present at launch and makes the existing --use-sage-attention flag functional. The runtime image remains minimal while the builder stage carries heavy toolchains; matching Torch across stages prevents CUDA/ABI mismatch. Also retains the previous launch command so ComfyUI auto-enables SageAttention on startup.
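A smoke test of the kind this layout enables, assuming the cu129 stack in both stages; an ABI or CUDA-version mismatch typically surfaces as an ImportError here:

```python
# Verify the compiled extension loads against the runtime's Torch,
# and print the versions the wheel was expected to match.
import torch
import sageattention  # noqa: F401

print("torch", torch.__version__, "cuda", torch.version.cuda)
```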
Update README to reflect that SageAttention 2.2/2++ is compiled into the
image at build time and enabled automatically on launch using
--use-sage-attention. Clarify NVIDIA GPU setup expectations and note that
no extra steps are required to activate SageAttention in container runs.
Changes:
- Features: add “SageAttention 2.2 baked in” and “Auto-enabled at launch”.
- Getting Started: note that SageAttention is compiled during docker build
and requires no manual install.
- Docker Compose: confirm the image launches with SageAttention enabled by default.
- Usage: add a SageAttention subsection with startup log verification notes.
- General cleanup and wording adjustments to align with current image behavior.
No functional code changes; documentation only.
Adds a multi-stage Docker build that compiles SageAttention 2.2/2++ from the upstream repository head into a wheel using nvcc, then installs it into the slim runtime to keep images small. Ensures the builder installs the same Torch CUDA 12.9 stack as the runtime so the compiled extension’s ABI matches at load time. Shallow-clones the SageAttention repo so each new image build pulls the latest upstream version. Updates the container launch to pass --use-sage-attention so ComfyUI enables SageAttention at startup when the package is present. This keeps the runtime minimal while delivering up-to-date, high-performance attention kernels for modern NVIDIA GPUs in ComfyUI.
Introduce a run() helper that shell-quotes and prints each command before execution, and use it for mkdir/chown/chmod in the /usr/local-only Python target loop. This makes permission and path fixes visible in logs for easier debugging, preserves the existing error tolerance via || true, and remains compatible with set -euo pipefail and the runuser re-exec (the helper runs only in the root branch). No functional changes beyond the added verbosity; non-/usr/local paths remain a no-op.
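For illustration, a Python analog of the pattern (the actual helper is a shell function; the example command is hypothetical):

```python
import shlex
import subprocess

def run(*cmd: str) -> int:
    # Echo the exact, shell-quoted command before executing it so
    # permission/path fixes are visible in logs; callers decide whether
    # failures are tolerated (the shell version appends `|| true`).
    print("+", shlex.join(cmd), flush=True)
    return subprocess.call(cmd)

run("mkdir", "-p", "/usr/local/lib/python3.12/site-packages")
```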
This updates ComfyUI-Manager on container launch using a shallow fetch/reset pattern and cleans untracked files to ensure a fresh working tree, which is the recommended way to refresh depth-1 clones without pulling full history. It also installs every detected requirements.txt with pip install --upgrade and the only-if-needed upgrade strategy, so direct requirements are upgraded within constraints on each run, while still excluding Manager from wheel builds to avoid setuptools flat-layout errors.
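The fetch/reset pattern, sketched in Python for reference (the launch script runs the git commands directly; `path` and the ref name are assumptions):

```python
import subprocess

def refresh_shallow_clone(path: str, remote: str = "origin", ref: str = "main") -> None:
    # Refresh a depth-1 clone without downloading history: fetch only
    # the remote tip, hard-reset the working tree onto it, then delete
    # untracked files and directories for a clean state.
    for cmd in (
        ["git", "fetch", "--depth", "1", remote, ref],
        ["git", "reset", "--hard", "FETCH_HEAD"],
        ["git", "clean", "-fd"],
    ):
        subprocess.run(cmd, cwd=path, check=True)
```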
This adds av>=14.2 to satisfy Comfy’s API-node canary, ensuring video/audio nodes import without error, and uses the standard PyTorch CUDA 12.9 index URL syntax for reliability. It also installs nvidia-ml-py to align with the ecosystem shift away from deprecated pynvml, reducing future NVML warnings while preserving current functionality. The rest of the base remains unchanged, and existing ComfyUI requirements continue to install as before.
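For reference, the swap is a drop-in: the nvidia-ml-py package installs the same pynvml module name, so existing NVML calls keep working unchanged:

```python
import pynvml  # provided by nvidia-ml-py, replacing the deprecated pynvml package

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first visible GPU
print(pynvml.nvmlDeviceGetName(handle))
pynvml.nvmlShutdown()
```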
Update the Dockerfile to use python:3.12.11-slim-trixie to align with available cp312 wheels (notably MediaPipe) and avoid 3.13 ABI gaps, add cmake alongside build-essential to support native builds like dlib, keep the CUDA-enabled PyTorch install via the vendor index, and leave user/workdir/entrypoint/port settings unchanged to preserve runtime behavior.
* Looking into a @wrap_attn decorator that looks for an 'optimized_attention_override' entry in transformer_options (see the sketch after this list)
* Created logging code for this branch so that it can be used to track down all the code paths where transformer_options would need to be added
* Fix memory usage issue with inspect
* Made WAN attention receive transformer_options; added a test node to WAN to try out attention override later
* Added **kwargs to all attention functions so transformer_options could potentially be passed through
* Make sure wrap_attn doesn't recurse into itself; attempt to load SageAttention and FlashAttention even when not enabled so they can be marked as available or not; create a registry for available attention functions
* Turn off attention logging for now; make AttentionOverrideTestNode offer a dropdown of the available attention functions (this is a test node only)
* Make flux work with optimized_attention_override
* Add logs to verify optimized_attention_override is passed all the way into attention function
* Make Qwen work with optimized_attention_override
* Made hidream work with optimized_attention_override
* Made wan patches_replace work with optimized_attention_override
* Made SD3 work with optimized_attention_override
* Made HunyuanVideo work with optimized_attention_override
* Made Mochi work with optimized_attention_override
* Made LTX work with optimized_attention_override
* Made StableAudio work with optimized_attention_override
* Made optimized_attention_override work with ACE Step
* Made Hunyuan3D work with optimized_attention_override
* Make CosmosPredict2 work with optimized_attention_override
* Made CosmosVideo work with optimized_attention_override
* Made Omnigen 2 work with optimized_attention_override
* Made StableCascade work with optimized_attention_override
* Made AuraFlow work with optimized_attention_override
* Made Lumina work with optimized_attention_override
* Made Chroma work with optimized_attention_override
* Made SVD work with optimized_attention_override
* Fix WanI2VCrossAttention so that it expects to receive transformer_options
* Fixed Wan2.1 Fun Camera transformer_options passthrough
* Fixed WAN 2.1 VACE transformer_options passthrough
* Add optimized to get_attention_function
* Disable attention logs for now
* Remove attention logging code
* Remove _register_core_attention_functions, since we wouldn't want anyone to call it, just in case
* Satisfy ruff
* Remove AttentionOverrideTest node, that's something to cook up for later
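A minimal sketch of the decorator idea from the first item above, assuming the override receives the un-wrapped attention function as its first argument; illustrative only, not ComfyUI's actual implementation:

```python
from functools import wraps

def wrap_attn(attn_fn):
    # Look for an 'optimized_attention_override' entry in
    # transformer_options and delegate to it; otherwise fall through
    # to the wrapped attention function unchanged.
    @wraps(attn_fn)
    def wrapper(*args, transformer_options=None, **kwargs):
        if transformer_options:
            override = transformer_options.get("optimized_attention_override")
            if override is not None:
                # Hand the override the un-wrapped function so calling
                # back into it cannot recurse through this wrapper.
                return override(attn_fn, *args,
                                transformer_options=transformer_options,
                                **kwargs)
        return attn_fn(*args, transformer_options=transformer_options, **kwargs)
    return wrapper

@wrap_attn
def optimized_attention(q, k, v, heads=8, transformer_options=None, **kwargs):
    ...  # e.g. pytorch / sage / flash attention kernel

```

Passing the un-wrapped attn_fn into the override mirrors the "doesn't recurse infinitely" item, and the **kwargs item above is what lets transformer_options thread through every call site.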