- Replace wildcard/recursive requirements scanning with a per-node loop that installs only each node’s top-level requirements.txt and runs install.py when present, aligning behavior with ComfyUI-Manager and preventing unintended subfolder or variant requirements from being applied (sketched after this list).
- Drop automatic pyproject.toml/setup.py installs to avoid packaging nodes unnecessarily; ComfyUI loads nodes from custom_nodes directly.
- Keep user-level pip and permissions hardening so ComfyUI-Manager can later manage deps without permission errors.
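A minimal sketch of the per-node pass, assuming the standard custom_nodes layout (the path and the use of subprocess are illustrative, not the actual script):

```python
import subprocess
import sys
from pathlib import Path

CUSTOM_NODES = Path("ComfyUI/custom_nodes")  # assumed install location

for node in sorted(p for p in CUSTOM_NODES.iterdir() if p.is_dir()):
    req = node / "requirements.txt"  # top level only; no recursive globbing
    if req.is_file():
        subprocess.run([sys.executable, "-m", "pip", "install",
                        "--no-cache-dir", "-r", str(req)], check=False)
    installer = node / "install.py"  # node-provided post-install hook
    if installer.is_file():
        subprocess.run([sys.executable, str(installer)], cwd=node, check=False)
```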
- Add cupy-cuda12x to the base image so CuPy installs from wheels during build without requiring a GPU, matching the CUDA 12.x runtime and avoiding compilation on GitHub runners; this pairs with the existing CUDA 12.9 libs and ensures CuPy is ready for GPU hosts at runtime (see the import check after this list).
- Keep PyTorch CUDA 12.9, Triton, and media libs; no features removed.
- This change follows CuPy’s guidance to install cupy-cuda12x via pip for CUDA 12.x, which expects CUDA headers present via cuda-cudart-dev-12-x (already in image) or the nvidia-cuda-runtime-cu12 PyPI package path if needed, consistent with our Debian CUDA 12.9 setup.
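As a build-time sanity check, the wheel can be exercised without hardware; a sketch (not part of the image), assuming only that the package installed from a prebuilt wheel:

```python
import cupy

# Importing alone verifies the wheel resolved; no GPU is touched yet.
print("CuPy", cupy.__version__)

try:
    # Device enumeration needs a driver/GPU and is expected to fail on CI.
    print("devices:", cupy.cuda.runtime.getDeviceCount())
except Exception as exc:
    print("no GPU at build time (expected on runners):", exc)
```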
- Add an early runtime check that exits cleanly when no compatible NVIDIA GPU is detected, preventing unnecessary installs and builds on hosts without GPUs; this matches the repo’s requirement to target recent-gen NVIDIA GPUs and avoids wasted work on GitHub runners (sketched after this list).
- Mirror ComfyUI-Manager’s dependency behavior for custom nodes by: installing requirements*.txt and requirements/*.txt, building nodes with pyproject.toml using pip, and invoking node-provided install.py scripts when present, aligning with documented custom-node install flows.
- Enforce user-level pip installs (PIP_USER=1) and ensure /usr/local site-packages trees are owned and writable by the runtime user; this resolves permission-denied errors seen when Manager updates or removes packages (e.g., numpy __pycache__), improving reliability of Manager-driven installs and uninstalls.
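A sketch of the early exit (the nvidia-smi probe is an assumed detection method; the real script may differ):

```python
import shutil
import subprocess
import sys

def has_nvidia_gpu() -> bool:
    # Treat a missing binary or a failing probe as "no GPU".
    if shutil.which("nvidia-smi") is None:
        return False
    result = subprocess.run(["nvidia-smi", "-L"], capture_output=True)
    return result.returncode == 0 and bool(result.stdout.strip())

if not has_nvidia_gpu():
    print("No compatible NVIDIA GPU detected; skipping installs and builds.")
    sys.exit(0)
```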
* feature: Restrict Ascend NPU usage to a single device
* Enable the `--cuda-device` parameter to support both CUDA and Ascend NPUs simultaneously.
* Make the code only set the ASCEND_RT_VISIBLE_DEVICES environment variable, with no other edits to the master branch (sketched after this entry).
---------
Co-authored-by: Jedrzej Kosinski <kosinkadink1@gmail.com>
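A minimal sketch of the idea (function name hypothetical): ASCEND_RT_VISIBLE_DEVICES pins CANN to one NPU the same way CUDA_VISIBLE_DEVICES pins CUDA, so one --cuda-device value can drive both backends:

```python
import os

def apply_cuda_device(index: int) -> None:
    # Existing CUDA behavior: restrict visibility to the chosen device.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(index)
    # Ascend equivalent: CANN limits itself to the same index.
    os.environ["ASCEND_RT_VISIBLE_DEVICES"] = str(index)
```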
* flux: math: Use addcmul_ to avoid an expensive VRAM intermediate
The RoPE computation can be the VRAM peak, and allocating an intermediate for the addition result before the original is released can OOM; fuse the multiply-add with in-place addcmul_ instead.
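The pattern, as a sketch with stand-in tensors (shapes arbitrary):

```python
import torch

a = torch.randn(8, 1024, 1024)
b = torch.randn_like(a)
c = torch.randn_like(a)

# Before: `a = a + b * c` materializes a temporary for `b * c` plus a new
# result tensor while the original `a` is still alive -- the OOM point.
# After: one fused in-place op; `a` is updated with no new allocations.
a.addcmul_(b, c)  # a += b * c
```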
* wan: Delete the self-attention output before cross attention
This saves VRAM when the cross attention and FFN form the VRAM peak.
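A structural sketch (block layout assumed, not the actual wan code): dropping the last reference to the self-attention output before cross attention lets the allocator reuse that memory at the peak:

```python
def block(x, context, self_attn, cross_attn, ffn):
    y = self_attn(x)
    x = x + y
    del y                          # release before cross attention allocates
    x = x + cross_attn(x, context)
    return x + ffn(x)
```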
- Auto-derive TORCH_CUDA_ARCH_LIST from torch device capabilities (unique, sorted, optional +PTX) to cover all supported GPU generations: Turing 7.5, Ampere 8.0/8.6/8.7, Ada 8.9, Hopper 9.0, and Blackwell 10.0 & 12.0/12.1; add name-based fallbacks for mixed or torch-less scenarios (derivation sketched after this list).
- Add user-tunable build parallelism with SAGE_MAX_JOBS (preferred) and MAX_JOBS (alias) that cap PyTorch cpp_extension/ninja -j; fall back to a RAM/CPU heuristic to prevent OOM “Killed” during CUDA/C++ builds.
- Correct Sage flags: SAGE_ATTENTION_AVAILABLE only signals “built/installed,” while FORCE_SAGE_ATTENTION=1 enables Sage at startup; fix logs to reference FORCE_SAGE_ATTENTION.
- Maintain Triton install strategy by GPU generation for compatibility and performance.
- Add first-run dependency installation with COMFY_FORCE_INSTALL override; keep permissions bootstrap and minor logging/URL cleanups.
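A sketch of the derivation (helper name hypothetical), assuming torch is importable with CUDA available; the name-based fallbacks are omitted:

```python
import os
import torch

def derive_arch_list(add_ptx: bool = True) -> str:
    # Unique compute capabilities across visible devices, sorted ascending.
    caps = sorted(
        {"{}.{}".format(*torch.cuda.get_device_capability(i))
         for i in range(torch.cuda.device_count())},
        key=float,
    )
    if add_ptx and caps:
        caps[-1] += "+PTX"  # forward compatibility for newer GPUs
    return ";".join(caps)

if torch.cuda.is_available():
    # e.g. a Turing + Ada machine yields "7.5;8.9+PTX"
    os.environ["TORCH_CUDA_ARCH_LIST"] = derive_arch_list()
```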
Adds installed and required workflow templates version information to the
/system_stats endpoint, allowing the frontend to detect and notify users
when their templates package is outdated.
- Add get_installed_templates_version() and get_required_templates_version()
methods to FrontendManager
- Include templates version info in the system_stats response (shape sketched below)
- Add comprehensive unit tests for the new functionality
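A sketch of the response shape (only the FrontendManager method names come from this change; the surrounding structure is illustrative):

```python
def build_system_stats(frontend_manager) -> dict:
    stats = {"system": {}}  # existing fields elided
    stats["system"]["templates_version"] = (
        frontend_manager.get_installed_templates_version()
    )
    stats["system"]["required_templates_version"] = (
        frontend_manager.get_required_templates_version()
    )
    return stats
```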
Introduce a first-run flag to install custom_nodes dependencies only on the
initial container start, with COMFY_FORCE_INSTALL=1 to override on demand;
correct Sage Attention flag semantics so SAGE_ATTENTION_AVAILABLE=1 only
indicates the build is present while FORCE_SAGE_ATTENTION=1 enables it at
startup; fix the misleading log to reference FORCE_SAGE_ATTENTION. Update
TORCH_CUDA_ARCH_LIST mapping to 7.5 (Turing), 8.6 (Ampere), 8.9 (Ada), and
10.0 (Blackwell/RTX 50); retain Triton strategy with a compatibility pin on
Turing and latest for Blackwell, including fallbacks. Clean up git clone URLs,
standardize on python -m pip, and tighten logs; preserve user remapping and
strategy-based rebuild detection via the .built flag.
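A sketch of the first-run gate (marker path and helper name assumed):

```python
import os
from pathlib import Path

FIRST_RUN_FLAG = Path("/app/.custom_nodes_installed")  # assumed marker path

def install_custom_node_dependencies() -> None:
    """Placeholder for the per-node requirements/install.py pass."""

def should_install() -> bool:
    return (os.environ.get("COMFY_FORCE_INSTALL") == "1"
            or not FIRST_RUN_FLAG.exists())

if should_install():
    install_custom_node_dependencies()
    FIRST_RUN_FLAG.touch()  # skip reinstalls on later boots
```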
Replace Markdown-style links in git clone with standard HTTPS URLs so the
repository actually clones under bash.
Derive TORCH_CUDA_ARCH_LIST from PyTorch devices and add +PTX to the
highest architecture for forward-compat extension builds.
Warn explicitly on Blackwell (sm_120) when the active torch/CUDA build
lacks support, prompting an upgrade to torch with CUDA 12.8+.
Keep pip --no-cache-dir, preserve Triton pin for Turing, and retain
idempotent ComfyUI-Manager update logic.
Cap build parallelism via MAX_JOBS (overridable through SAGEATTENTION_MAX_JOBS) and
CMAKE_BUILD_PARALLEL_LEVEL to prevent OOM kills during nvcc/cc1plus when
ninja fan-out is high in constrained containers (sketched below).
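A sketch of the cap (the RAM-per-job heuristic is assumed and Linux-only; the env names come from this entry):

```python
import os

def max_build_jobs(gb_per_job: float = 4.0) -> int:
    # Explicit override wins.
    override = os.environ.get("SAGEATTENTION_MAX_JOBS", "")
    if override.isdigit():
        return max(1, int(override))
    # Otherwise bound jobs by CPUs and by available RAM per compile job.
    cpus = os.cpu_count() or 1
    ram_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 2**30
    return max(1, min(cpus, int(ram_gb // gb_per_job)))

jobs = str(max_build_jobs())
os.environ["MAX_JOBS"] = jobs                    # torch cpp_extension / ninja
os.environ["CMAKE_BUILD_PARALLEL_LEVEL"] = jobs  # cmake-driven builds
```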
Compute TORCH_CUDA_ARCH_LIST from torch.cuda device properties to target
exact GPU SMs across mixed setups; keep human-readable nvidia-smi logs.
Move PATH/PYTHONPATH exports earlier and use `python -m pip` with
`--no-cache-dir` consistently to avoid stale caches and reduce image bloat.
Fix git clone/update commands to standard HTTPS and reset against
origin/HEAD; keep shallow operations for speed and reproducibility.
Clarify Sage Attention flags: set SAGE_ATTENTION_AVAILABLE only when
module import succeeds; require FORCE_SAGE_ATTENTION=1 to enable at boot.
Keep first-run dependency installation with COMFY_AUTO_INSTALL=1 override
to re-run installs on later boots without removing the first-run flag.