- Add pre-build cleanup on GitHub-hosted runner using jlumbroso/free-disk-space plus Docker builder/system prune to maximize available disk for Docker builds.
- Add pre-build Docker cache pruning and disk checks on self-hosted runner to keep it minimal and appropriate for ephemeral runners.
- Change fallback logic to run self-hosted only if the GitHub-hosted build fails, using needs.<job>.result with always() to ensure the fallback job triggers after a primary failure.
- Keep GHCR login via docker/login-action@v3 and Buildx via docker/setup-buildx-action@v3; build with docker/build-push-action@v6.
- Publish release only if either build path succeeds; fail workflow if both builds or release publish fail.
- Remove post-build cleanup steps (BuildKit image removal and general pruning) to align with instruction not to worry about post cleanup on ephemeral runners.
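The pre-build cleanup and fallback gating described above might look roughly like this in the workflow; job names, step ordering, and the `free-disk-space` ref are illustrative:

```yaml
jobs:
  build-gh:
    runs-on: ubuntu-latest
    steps:
      - name: Free Disk Space (Ubuntu)
        uses: jlumbroso/free-disk-space@main
      - name: Prune Docker caches
        run: |
          docker builder prune -af
          docker system prune -af --volumes
      # ... GHCR login, Buildx setup, build-push ...
  build-selfhosted:
    needs: build-gh
    # run only when the GitHub-hosted build did not succeed
    if: always() && needs.build-gh.result != 'success'
    runs-on: self-hosted
    steps:
      # ... minimal pre-clean, then the same build steps ...
```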
- Add an early GPU probe that tries nvidia-smi first, then torch, gating on compute capability >= 7.5; write a pass flag to avoid reprobing, and exit 42 otherwise to prevent unnecessary work.
- Move GPU detection before any user/permission operations to stop repeated permission logs on restarts.
- Replace bracketed Markdown URLs with plain URLs in git commands.
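A minimal sketch of such a probe, assuming the pass-flag path and the `probe_gpu` invocation point are illustrative (the `compute_cap` query needs a reasonably recent NVIDIA driver; the 7.5 threshold and exit code 42 are as described above):

```shell
# Hypothetical early GPU probe, run before any user/permission setup.
# A pass flag avoids reprobing on restart; exit code 42 signals "no usable GPU".
PASS_FLAG="${PASS_FLAG:-/tmp/.gpu_probe_ok}"   # illustrative flag location

cc_ok() {
  # succeed when a compute capability string such as "8.6" is >= 7.5
  awk -v cc="$1" 'BEGIN { exit !(cc + 0 >= 7.5) }'
}

detect_cc() {
  # try nvidia-smi first, then fall back to a torch query
  if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=compute_cap --format=csv,noheader 2>/dev/null | head -n1
    return
  fi
  python3 - <<'PY' 2>/dev/null
import torch
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f"{major}.{minor}")
PY
}

probe_gpu() {
  [ -f "$PASS_FLAG" ] && return 0
  local cc
  cc=$(detect_cc)
  if [ -n "$cc" ] && cc_ok "$cc"; then
    touch "$PASS_FLAG"
    return 0
  fi
  return 42
}
# the entrypoint would invoke it as: probe_gpu || exit $?
```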
- Gate on new upstream release whose tag matches comfyui_version.py; skip if already released locally.
- Sync master from upstream (keep local README.md), then build and push image on GH runner with pre-clean; fallback to ephemeral self-hosted if GH build fails; publish release if either path succeeds.
- Remove unnecessary post-job cleanup since runners are ephemeral; rely on setup-buildx cleanup.
- Keep pre-build cleanup on GH runners (free-disk-space action and Docker builder/system prune) to prevent ENOSPC during builds.
- Remove post-job prune steps for both GH and ephemeral self-hosted runners since runners are discarded after the job and setup-buildx uses cleanup=true to remove builders automatically.
- Retain fallback: self-hosted build runs only if the GH build fails; publish succeeds if either path succeeds; final job fails only if both builds fail.
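The release gate might be sketched as follows; the upstream URL, tag format, and the version-assignment shape inside comfyui_version.py are assumptions:

```shell
# Hypothetical gate: build only when upstream's latest tag differs from
# the version recorded locally in comfyui_version.py.
extract_version() {
  # pull the quoted version out of comfyui_version.py content on stdin
  sed -n 's/.*__version__ *= *"\([^"]*\)".*/\1/p'
}

latest_upstream_tag() {
  # highest version tag on the upstream repository (URL assumed)
  git ls-remote --tags --sort=-v:refname \
    https://github.com/comfyanonymous/ComfyUI.git \
    | sed 's@.*refs/tags/@@' | grep -v '\^{}' | head -n1
}

# usage sketch:
# local_v=$(extract_version < comfyui_version.py)
# [ "v$local_v" = "$(latest_upstream_tag)" ] && echo "already released; skip"
```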
- Remove libcairo2 from apt since libcairo2-dev already depends on and installs it; avoids redundant listing while keeping Cairo headers needed for builds.
- Add onnxruntime-gpu to Python dependencies so CUDAExecutionProvider is available without runtime installation steps.
- Drop the runtime ONNX Runtime installer/check block that used a heredoc followed by a brace group, which caused an "unexpected end of file" error.
- Keep Manager pip preflight and toml preinstall; retain unified torch-based GPU probe and SageAttention flow.
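In the Dockerfile, these dependency changes might look like the following (the exact invocations are illustrative):

```dockerfile
# Cairo headers for building pycairo; libcairo2-dev already depends on libcairo2
RUN apt-get update \
 && apt-get install -y --no-install-recommends pkg-config libcairo2-dev \
 && rm -rf /var/lib/apt/lists/*

# bake the CUDA-enabled ONNX Runtime into the image instead of installing at runtime
RUN pip install --no-cache-dir onnxruntime-gpu
```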
- Add “Free Disk Space (Ubuntu)” and Docker prune steps before/after the GitHub-hosted build to recover tens of GB and avoid “no space left on device” failures on ubuntu-latest.
- Remove continue-on-error and gate the self-hosted job with `always() && needs.build-gh.result != 'success'` so it runs only if the GH build fails, while publish proceeds if either path succeeds.
- Enable buildx GHA cache (cache-from/cache-to) to minimize runner disk pressure and rebuild times without loading images locally.
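The build step with the GHA cache backend might look like this; the image name is a placeholder:

```yaml
- name: Build and push
  uses: docker/build-push-action@v6
  with:
    push: true
    tags: ghcr.io/OWNER/IMAGE:latest   # placeholder
    cache-from: type=gha
    cache-to: type=gha,mode=max
```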
- Drop runtime installer for uv; uv is now baked into the image via Dockerfile.
- Keep pip preflight (ensurepip) and toml preinstall to satisfy ComfyUI-Manager’s prestartup requirements.
- Retain unified torch-based GPU probe, SageAttention setup, custom node install flow, and ONNX Runtime CUDA provider guard.
- Copy uv and uvx from ghcr.io/astral-sh/uv:latest into /usr/local/bin to provide a fast package manager at build time without curl, always fetching the newest release.
- Keep the image GPU-agnostic and improve cold starts; the entrypoint retains a pip fallback for robustness in multi-user environments.
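The copy follows the multi-stage pattern documented by Astral; pinning a version tag instead of `:latest` would trade freshness for reproducible builds:

```dockerfile
# copy the uv and uvx binaries from the official image; :latest always pulls the newest release
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /usr/local/bin/
```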
- Add runtime guard to verify ONNX Runtime has CUDAExecutionProvider; if missing, uninstall CPU-only onnxruntime and install onnxruntime-gpu, then re-verify providers.
- Replace the early GPU checks with one torch-based probe that detects devices and compute capability, sets DET_* flags, TORCH_CUDA_ARCH_LIST, and SAGE_STRATEGY, and exits fast when CC < 7.5.
- Ensure python -m pip is available (bootstrap with ensurepip if necessary) so ComfyUI-Manager can run package operations during prestartup.
- Install uv system-wide to /usr/local/bin if missing (UV_UNMANAGED_INSTALL) for a fast package manager alternative without modifying shell profiles.
- Preinstall toml if its import fails to avoid Manager import errors before Manager runs its own install steps.
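The preflight steps above could be sketched as small entrypoint helpers; function names are illustrative, and `UV_UNMANAGED_INSTALL` follows Astral's standalone installer convention for installing to a fixed directory without touching shell profiles:

```shell
# Hypothetical entrypoint preflight for ComfyUI-Manager prerequisites.
ensure_pip() {
  # bootstrap pip with ensurepip if python -m pip is unavailable
  python3 -m pip --version >/dev/null 2>&1 || python3 -m ensurepip --upgrade
}

ensure_uv() {
  # install uv system-wide to /usr/local/bin only when missing
  command -v uv >/dev/null 2>&1 && return 0
  curl -LsSf https://astral.sh/uv/install.sh \
    | env UV_UNMANAGED_INSTALL=/usr/local/bin sh
}

ensure_toml() {
  # preinstall toml so Manager's prestartup import does not fail
  python3 -c 'import toml' 2>/dev/null || python3 -m pip install toml
}

ensure_onnx_cuda() {
  # swap a CPU-only onnxruntime for onnxruntime-gpu when the CUDA provider is missing
  python3 -c 'import onnxruntime as o; assert "CUDAExecutionProvider" in o.get_available_providers()' 2>/dev/null && return 0
  python3 -m pip uninstall -y onnxruntime 2>/dev/null || true
  python3 -m pip install onnxruntime-gpu
  python3 -c 'import onnxruntime as o; assert "CUDAExecutionProvider" in o.get_available_providers()'
}
# the entrypoint would call: ensure_pip; ensure_uv; ensure_toml; ensure_onnx_cuda
```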
- Install pkg-config, libcairo2, and libcairo2-dev so pip can build/use pycairo required by svglib/rlPyCairo, preventing meson/pkg-config “Dependency cairo not found” errors on Debian/Ubuntu bases.
- Define COMFYUI_PATH=/app/ComfyUI and both COMFYUI_MODEL_PATH=/app/ComfyUI/models and COMFYUI_MODELS_PATH=/app/ComfyUI/models to satisfy common tool conventions and silence CLI warnings, while remaining compatible with extra_model_paths.yaml for canonical model routing.
- Replace wildcard/recursive requirements scanning with a per-node loop that installs only each node’s top-level requirements.txt and runs install.py when present, aligning behavior with ComfyUI-Manager and preventing unintended subfolder or variant requirements from being applied.
- Drop automatic pyproject.toml/setup.py installs to avoid packaging nodes unnecessarily; ComfyUI loads nodes from custom_nodes directly.
- Keep user-level pip and permissions hardening so ComfyUI-Manager can later manage deps without permission errors.
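The per-node loop might be sketched like this, assuming a `custom_nodes` layout and a swappable `pip_install` wrapper (both names are illustrative); only the top-level requirements.txt and install.py of each node are honored:

```shell
# Hypothetical per-node install loop mirroring ComfyUI-Manager's behavior:
# no recursive scan, no pyproject.toml/setup.py packaging.
install_custom_nodes() {
  local nodes_dir="$1"
  local node
  for node in "$nodes_dir"/*/; do
    [ -d "$node" ] || continue
    if [ -f "$node/requirements.txt" ]; then
      pip_install -r "$node/requirements.txt"
    fi
    if [ -f "$node/install.py" ]; then
      (cd "$node" && python3 install.py)
    fi
  done
}

# thin wrapper so the installer can be swapped (e.g. for uv pip)
pip_install() { python3 -m pip install "$@"; }
```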
- Add cupy-cuda12x to the base image so CuPy installs from prebuilt wheels during build without requiring a GPU. This matches the CUDA 12.x runtime, avoids compilation on GitHub runners, pairs with the existing CUDA 12.9 libraries, and leaves CuPy ready for GPU hosts at runtime.
- Keep PyTorch CUDA 12.9, Triton, and media libs; no features removed.
- This follows CuPy's guidance to install cupy-cuda12x via pip for CUDA 12.x; it expects CUDA headers to be present either via cuda-cudart-dev-12-x (already in the image) or via the nvidia-cuda-runtime-cu12 PyPI package, consistent with our Debian CUDA 12.9 setup.
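In the Dockerfile this is a single wheel install; no GPU or compiler is needed at build time:

```dockerfile
# cupy-cuda12x ships prebuilt wheels matching the CUDA 12.x runtime
RUN pip install --no-cache-dir cupy-cuda12x
```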
- Add an early runtime check that exits cleanly when no compatible NVIDIA GPU is detected, preventing unnecessary installs and builds on hosts without GPUs, which matches the repo’s requirement to target recent-gen NVIDIA GPUs and avoids work on GitHub runners.
- Mirror ComfyUI-Manager’s dependency behavior for custom nodes by: installing requirements*.txt and requirements/*.txt, building nodes with pyproject.toml using pip, and invoking node-provided install.py scripts when present, aligning with documented custom-node install flows.
- Enforce user-level pip installs (PIP_USER=1) and ensure /usr/local site-packages trees are owned and writable by the runtime user; this resolves permission-denied errors seen when Manager updates or removes packages (e.g., numpy __pycache__), improving reliability of Manager-driven installs and uninstalls.
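The pip/permissions hardening might be sketched as follows; the runtime user name and the chown target are assumptions:

```shell
# Hypothetical hardening for Manager-driven installs/uninstalls:
# user-level pip plus a writable site-packages tree.
export PIP_USER=1
SITE_DIR="$(python3 -c 'import sysconfig; print(sysconfig.get_paths()["purelib"])')"
# SITE_DIR identifies the tree to make writable; during image build, as root:
# chown -R comfy:comfy "$SITE_DIR"   # "comfy" is an assumed runtime user
```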
* feature: Restrict the Ascend NPU to a single device
* Enable the `--cuda-device` parameter to support both CUDA and Ascend NPUs simultaneously.
* Only set the ASCEND_RT_VISIBLE_DEVICES environment variable, with no other edits to the master branch.
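The change amounts to mirroring the `--cuda-device` index into Ascend's runtime visibility variable; a sketch (the function name is illustrative, the variable follows CANN conventions):

```shell
set_ascend_device() {
  # mirror the --cuda-device index into Ascend's visibility variable
  # so only a single NPU is used, with no other code changes
  export ASCEND_RT_VISIBLE_DEVICES="$1"
}

# usage: restrict to device 0, then launch ComfyUI
# set_ascend_device 0
# python3 main.py --cuda-device 0
```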
---------
Co-authored-by: Jedrzej Kosinski <kosinkadink1@gmail.com>