Commit Graph

4179 Commits

Author SHA1 Message Date
Alexander Piskun
b682a73c55
enable Seedance Pro model in the FirstLastFrame node (#10120) 2025-09-30 10:43:41 -07:00
clsferguson
893e76e908
feat(entrypoint): ensure ORT CUDA at runtime and unify GPU probe via torch; fix Manager package ops (pip/uv) and preinstall toml
- Add runtime guard to verify ONNX Runtime has CUDAExecutionProvider; if missing, uninstall CPU-only onnxruntime and install onnxruntime-gpu, then re-verify providers.
- Replace early gpu checks with one torch-based probe that detects devices and compute capability, sets DET_* flags, TORCH_CUDA_ARCH_LIST, and SAGE_STRATEGY, and exits fast when CC < 7.5.
- Ensure python -m pip is available (bootstrap with ensurepip if necessary) so ComfyUI-Manager can run package operations during prestartup.
- Install uv system-wide to /usr/local/bin if missing (UV_UNMANAGED_INSTALL) for a fast package manager alternative without modifying shell profiles.
- Preinstall toml if its import fails to avoid Manager import errors before Manager runs its own install steps.
2025-09-30 11:30:22 -06:00
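The ONNX Runtime guard described in this commit can be sketched roughly as follows. This is a minimal illustration, not the actual entrypoint code: `ort_needs_gpu_reinstall` and `ensure_ort_cuda` are hypothetical helper names, and the decision logic is factored out so it is visible on its own.

```python
# Sketch of the ORT runtime guard, assuming it runs under the same
# Python interpreter that ComfyUI uses.
import subprocess
import sys

def ort_needs_gpu_reinstall(providers):
    """True when the installed ONNX Runtime lacks the CUDA provider."""
    return "CUDAExecutionProvider" not in providers

def ensure_ort_cuda():
    try:
        import onnxruntime as ort
        providers = ort.get_available_providers()
    except ImportError:
        providers = []
    if ort_needs_gpu_reinstall(providers):
        # Swap the CPU-only wheel for the GPU build; the entrypoint then
        # re-imports onnxruntime and re-verifies the provider list.
        subprocess.check_call([sys.executable, "-m", "pip",
                               "uninstall", "-y", "onnxruntime"])
        subprocess.check_call([sys.executable, "-m", "pip",
                               "install", "onnxruntime-gpu"])
```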
clsferguson
08a12867d1
feat(dockerfile): add Cairo/pkg-config for pycairo and define COMFYUI path env vars
- Install pkg-config, libcairo2, and libcairo2-dev so pip can build/use pycairo required by svglib/rlPyCairo, preventing meson/pkg-config “Dependency cairo not found” errors on Debian/Ubuntu bases.
- Define COMFYUI_PATH=/app/ComfyUI and both COMFYUI_MODEL_PATH=/app/ComfyUI/models and COMFYUI_MODELS_PATH=/app/ComfyUI/models to satisfy common tool conventions and silence CLI warnings, while remaining compatible with extra_model_paths.yaml for canonical model routing.
2025-09-30 11:29:25 -06:00
Alexander Piskun
631b9ae861
fix(Rodin3D-Gen2): missing "task_uuid" parameter (#10128) 2025-09-30 10:21:47 -07:00
clsferguson
a632e1c5be
fix(entrypoint): install only root requirements.txt and install.py per node; remove wildcards and recursion
- Replace wildcard/recursive requirements scanning with a per-node loop that installs only each node’s top-level requirements.txt and runs install.py when present, aligning behavior with ComfyUI-Manager and preventing unintended subfolder or variant requirements from being applied.
- Drop automatic pyproject.toml/setup.py installs to avoid packaging nodes unnecessarily; ComfyUI loads nodes from custom_nodes directly.
- Keep user-level pip and permissions hardening so ComfyUI-Manager can later manage deps without permission errors.
2025-09-30 10:23:29 -06:00
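The per-node loop described in this commit — top-level requirements.txt and install.py only, no wildcards or recursion — can be sketched like this. The function name and return shape are illustrative; the real entrypoint is a shell loop.

```python
# Sketch of the per-node dependency scan: only each node's top-level
# requirements.txt and install.py are picked up, so subfolder or
# variant requirements files are deliberately ignored.
from pathlib import Path

def plan_node_installs(custom_nodes_dir):
    """Return (requirements_files, install_scripts) found at each
    node directory's top level only."""
    reqs, scripts = [], []
    for node in sorted(Path(custom_nodes_dir).iterdir()):
        if not node.is_dir():
            continue
        req = node / "requirements.txt"
        if req.is_file():
            reqs.append(req)
        inst = node / "install.py"
        if inst.is_file():
            scripts.append(inst)
    return reqs, scripts
```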
comfyanonymous
f48d7230de
Add new portable links to readme. (#10112) 2025-09-30 12:17:49 -04:00
clsferguson
16652fb90a
feat(dockerfile): add CuPy (CUDA 12.x), keep wheel-only installs, and align CUDA headers with CUDA 12.9 toolchain
- Add cupy-cuda12x to base image so CuPy installs from wheels during build without requiring a GPU, matching CUDA 12.x runtime and avoiding compilation on GitHub runners; this pairs with existing CUDA 12.9 libs and ensures CuPy is ready for GPU hosts at runtime. 
- Keep PyTorch CUDA 12.9, Triton, and media libs; no features removed. 
- This change follows CuPy’s guidance to install cupy-cuda12x via pip for CUDA 12.x, which expects CUDA headers present via cuda-cudart-dev-12-x (already in image) or the nvidia-cuda-runtime-cu12 PyPI package path if needed, consistent with our Debian CUDA 12.9 setup.
2025-09-29 22:38:19 -06:00
clsferguson
b0b95e5cc5
feat(entrypoint): fail-fast when no compatible NVIDIA GPU, mirror Manager’s dependency install steps, and harden permissions for Manager operations
- Add an early runtime check that exits cleanly when no compatible NVIDIA GPU is detected, preventing unnecessary installs and builds on hosts without GPUs, which matches the repo’s requirement to target recent-gen NVIDIA GPUs and avoids work on GitHub runners. 
- Mirror ComfyUI-Manager’s dependency behavior for custom nodes by: installing requirements*.txt and requirements/*.txt, building nodes with pyproject.toml using pip, and invoking node-provided install.py scripts when present, aligning with documented custom-node install flows. 
- Enforce user-level pip installs (PIP_USER=1) and ensure /usr/local site-packages trees are owned and writable by the runtime user; this resolves permission-denied errors seen when Manager updates or removes packages (e.g., numpy __pycache__), improving reliability of Manager-driven installs and uninstalls.
2025-09-29 22:36:35 -06:00
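The fail-fast GPU check from this commit reduces to a small capability gate. A minimal sketch, assuming the capability tuples come from `torch.cuda.get_device_capability(i)` for each visible device; `gpu_gate` is a hypothetical name:

```python
# Sketch of the early exit: with no device at or above the minimum
# compute capability, the entrypoint stops before any installs/builds.
def gpu_gate(capabilities, min_cc=(7, 5)):
    """True when at least one detected GPU meets the minimum
    compute capability (default 7.5, i.e. Turing or newer)."""
    return any(tuple(cc) >= min_cc for cc in capabilities)
```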
comfyanonymous
6e079abc3a
Workflow permission fix. (#10110) 2025-09-29 23:11:37 -04:00
comfyanonymous
977a4ed8c5
ComfyUI version 0.3.61 2025-09-29 23:04:42 -04:00

comfyanonymous
414a178fb6
Add basic readme for AMD portable. (#10109) 2025-09-29 23:03:02 -04:00
comfyanonymous
447884b657
Make stable release workflow callable. (#10108) 2025-09-29 20:37:51 -04:00
comfyanonymous
bed4b49d08
Add action to do the full stable release. (#10107) 2025-09-29 20:31:15 -04:00
comfyanonymous
342cf644ce
Add a way to have different names for stable nvidia portables. (#10106) 2025-09-29 20:05:44 -04:00
comfyanonymous
3758848423
Different base files for nvidia and amd portables. (#10105) 2025-09-29 19:54:37 -04:00
comfyanonymous
0db6aabed3
Different base files for different release. (#10104) 2025-09-29 19:54:05 -04:00
comfyanonymous
1673ace19b
Make the final release test optional in the stable release action. (#10103) 2025-09-29 19:08:42 -04:00
comfyanonymous
7f38e4c538
Add action to create cached deps with manually specified torch. (#10102) 2025-09-29 17:27:52 -04:00
Alexander Piskun
8accf50908
convert nodes_mahiro.py to V3 schema (#10070) 2025-09-29 12:35:51 -07:00
Christian Byrne
ed0f4a609b
dont cache new locale entry points (#10101) 2025-09-29 12:16:02 -07:00
Alexander Piskun
041b8824f5
convert nodes_perpneg.py to V3 schema (#10081) 2025-09-29 12:05:28 -07:00
Alexander Piskun
b1111c2062
convert nodes_mochi.py to V3 schema (#10069) 2025-09-29 12:03:35 -07:00
Alexander Piskun
05a258efd8
add WanImageToImageApi node (#10094) 2025-09-29 12:01:04 -07:00
ComfyUI Wiki
c8276f8c6b
Update template to 0.1.91 (#10096) 2025-09-29 11:59:42 -07:00
Changrz
6ec1cfe101
[Rodin3d api nodes] Updated the name of the save file path (changed from timestamp to UUID). (#10011)
* Update savepath name from time to uuid

* delete lib
2025-09-29 11:59:12 -07:00
comfyanonymous
b60dc31627
Update command to install latest nightly pytorch. (#10085) 2025-09-28 13:41:32 -04:00
comfyanonymous
555f902fc1
Fix stable workflow creating multiple draft releases. (#10067) 2025-09-27 22:43:25 -04:00
Rui Wang (王瑞)
1364548c72
feat: ComfyUI can be run on the specified Ascend NPU (#9663)
* feature: set a single Ascend NPU to be used

* Enable the `--cuda-device` parameter to support both CUDA and Ascend NPUs simultaneously.

* Make the code only set the ASCEND_RT_VISIBLE_DEVICES environment variable, without any other edits to the master branch

---------

Co-authored-by: Jedrzej Kosinski <kosinkadink1@gmail.com>
2025-09-27 22:36:02 -04:00
Alexander Piskun
2dadb34860
convert nodes_hypertile.py to V3 schema (#10061) 2025-09-27 19:16:22 -07:00
Alexander Piskun
1cf86f5ae5
convert nodes_lumina2.py to V3 schema (#10058) 2025-09-27 19:12:51 -07:00
Alexander Piskun
a1127b232d
convert nodes_lotus.py to V3 schema (#10057) 2025-09-27 19:11:36 -07:00
comfyanonymous
896f2e653c
Fix typo in release workflow. (#10066) 2025-09-27 21:30:35 -04:00
comfyanonymous
40ae495ddc
Improvements to the stable release workflow. (#10065) 2025-09-27 20:28:49 -04:00
rattus128
653ceab414
Reduce Peak WAN inference VRAM usage - part II (#10062)
* flux: math: Use addcmul_ to avoid an expensive VRAM intermediate

The rope process can be the VRAM peak, and allocating this intermediate
for the addition result before releasing the original can OOM;
addcmul_ avoids it.

* wan: Delete the self attention before cross attention

This saves VRAM when the cross attention and FFN are in play as the
VRAM peak.
2025-09-27 18:14:16 -04:00
Alexander Piskun
160698eb41
convert nodes_qwen.py to V3 schema (#10049) 2025-09-27 12:25:35 -07:00
Alexander Piskun
7eca95657c
convert nodes_photomaker.py to V3 schema (#10017) 2025-09-27 02:36:43 -07:00
Alexander Piskun
ad5aef2d0c
convert nodes_pixart.py to V3 schema (#10019) 2025-09-27 02:34:32 -07:00
Alexander Piskun
bcfd80dd79
convert nodes_luma.py to V3 schema (#10030) 2025-09-27 02:28:11 -07:00
Alexander Piskun
6b4b671ce7
convert nodes_bfl.py to V3 schema (#10033) 2025-09-27 02:27:01 -07:00
Alexander Piskun
a9cf1cd249
convert nodes_hidream.py to V3 schema (#9946) 2025-09-26 23:13:05 -07:00
clsferguson
f6d49f33b7
entrypoint: derive correct arch list; add user-tunable build parallelism; fix Sage flags; first-run installs
- Auto-derive TORCH_CUDA_ARCH_LIST from torch device capabilities (unique, sorted, optional +PTX) to cover all charted GPUs:
  Turing 7.5, Ampere 8.0/8.6/8.7, Ada 8.9, Hopper 9.0, and Blackwell 10.0 & 12.0/12.1; add name-based fallbacks for mixed or torch-less scenarios.
- Add user-tunable build parallelism with SAGE_MAX_JOBS (preferred) and MAX_JOBS (alias) that cap PyTorch cpp_extension/ninja -j; fall back to a RAM/CPU heuristic to prevent OOM “Killed” during CUDA/C++ builds.
- Correct Sage flags: SAGE_ATTENTION_AVAILABLE only signals “built/installed,” while FORCE_SAGE_ATTENTION=1 enables Sage at startup; fix logs to reference FORCE_SAGE_ATTENTION.
- Maintain Triton install strategy by GPU generation for compatibility and performance.
- Add first-run dependency installation with COMFY_FORCE_INSTALL override; keep permissions bootstrap and minor logging/URL cleanups.
2025-09-26 22:37:24 -06:00
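The arch-list derivation this commit describes (unique, sorted, optional +PTX on the highest) can be sketched as below. `derive_arch_list` is an illustrative name; the input tuples would come from `torch.cuda.get_device_capability(i)` per visible device.

```python
# Sketch of TORCH_CUDA_ARCH_LIST derivation from device capabilities:
# dedupe, sort ascending, and append +PTX to the highest arch for
# forward-compatible extension builds.
def derive_arch_list(capabilities, add_ptx=True):
    """e.g. [(8, 6), (7, 5), (8, 6)] -> "7.5;8.6+PTX"."""
    archs = sorted({(major, minor) for major, minor in capabilities})
    out = [f"{major}.{minor}" for major, minor in archs]
    if out and add_ptx:
        out[-1] += "+PTX"
    return ";".join(out)
```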
Christian Byrne
255572188f
Add workflow templates version tracking to system_stats (#9089)
Adds installed and required workflow templates version information to the
/system_stats endpoint, allowing the frontend to detect and notify users
when their templates package is outdated.

- Add get_installed_templates_version() and get_required_templates_version()
  methods to FrontendManager
- Include templates version info in system_stats response
- Add comprehensive unit tests for the new functionality
2025-09-26 21:29:13 -07:00
ComfyUI Wiki
0572029fee
Update template to 0.1.88 (#10046) 2025-09-26 21:18:16 -07:00
Jedrzej Kosinski
196954ab8c
Add 'input_cond' and 'input_uncond' to the args dictionary passed into sampler_cfg_function (#10044) 2025-09-26 19:55:03 -07:00
clsferguson
45b87c7c99
Refactor entrypoint: first-run installs, fix Sage flags, arch map, logs
Introduce a first-run flag to install custom_nodes dependencies only on the
initial container start, with COMFY_FORCE_INSTALL=1 to override on demand;
correct Sage Attention flag semantics so SAGE_ATTENTION_AVAILABLE=1 only
indicates the build is present while FORCE_SAGE_ATTENTION=1 enables it at
startup; fix the misleading log to reference FORCE_SAGE_ATTENTION. Update
TORCH_CUDA_ARCH_LIST mapping to 7.5 (Turing), 8.6 (Ampere), 8.9 (Ada), and
10.0 (Blackwell/RTX 50); retain Triton strategy with a compatibility pin on
Turing and latest for Blackwell, including fallbacks. Clean up git clone URLs,
standardize on python -m pip, and tighten logs; preserve user remapping and
strategy-based rebuild detection via the .built flag.
2025-09-26 20:04:35 -06:00
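The corrected Sage flag semantics in this commit amount to a two-flag gate. A minimal sketch, using the env variable names from the commit (the function name is illustrative):

```python
# Sketch of the Sage Attention gate: SAGE_ATTENTION_AVAILABLE=1 only
# indicates a build is present; FORCE_SAGE_ATTENTION=1 is the explicit
# opt-in that actually enables it at startup.
def sage_enabled(env):
    """True only when Sage is both built/installed and forced on."""
    return (env.get("SAGE_ATTENTION_AVAILABLE") == "1"
            and env.get("FORCE_SAGE_ATTENTION") == "1")
```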
clsferguson
7ee4f37971
fix(bootstrap): valid git URLs, dynamic CUDA archs, +PTX fallback
Replace Markdown-style links in git clone with standard HTTPS URLs so the
repository actually clones under bash.
Derive TORCH_CUDA_ARCH_LIST from PyTorch devices and add +PTX to the
highest architecture for forward-compat extension builds.
Warn explicitly on Blackwell (sm_120) when the active torch/CUDA build
lacks support, prompting an upgrade to torch with CUDA 12.8+.
Keep pip --no-cache-dir, preserve Triton pin for Turing, and retain
idempotent ComfyUI-Manager update logic.
2025-09-26 19:11:46 -06:00
clsferguson
231082e2a6
rollback entrypoint.sh
Issues with the script; rolled back to an older modified version.
2025-09-26 18:52:38 -06:00
clsferguson
555b7d5606
feat(entrypoint): safer builds, dynamic CUDA archs, corrected git clone, first-run override, clarified Sage flags
Cap build parallelism via MAX_JOBS (override SAGEATTENTION_MAX_JOBS) and
CMAKE_BUILD_PARALLEL_LEVEL to prevent OOM kills during nvcc/cc1plus when
ninja fanout is high in constrained containers.

Compute TORCH_CUDA_ARCH_LIST from torch.cuda device properties to target
exact GPU SMs across mixed setups; keep human-readable nvidia-smi logs.

Move PATH/PYTHONPATH exports earlier and use `python -m pip` with
`--no-cache-dir` consistently to avoid stale caches and reduce image bloat.

Fix git clone/update commands to standard HTTPS and reset against
origin/HEAD; keep shallow operations for speed and reproducibility.

Clarify Sage Attention flags: set SAGE_ATTENTION_AVAILABLE only when
module import succeeds; require FORCE_SAGE_ATTENTION=1 to enable at boot.

Keep first-run dependency installation with COMFY_AUTO_INSTALL=1 override
to re-run installs on later boots without removing the first-run flag.
2025-09-26 18:19:23 -06:00
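The parallelism cap this commit describes can be sketched as below. The env variable names follow the commit (SAGEATTENTION_MAX_JOBS taking priority over MAX_JOBS); the 2 GB-per-job figure is an assumed heuristic, not taken from the commit, and the function name is illustrative.

```python
# Sketch of the build-jobs cap: honor an explicit override when set,
# otherwise bound ninja's -j by both CPU count and available RAM to
# avoid OOM "Killed" during nvcc/cc1plus.
def build_jobs(ram_gb, cpus, env=None, gb_per_job=2):
    """Return the -j value to pass to the build."""
    env = env or {}
    cap = env.get("SAGEATTENTION_MAX_JOBS") or env.get("MAX_JOBS")
    if cap:
        return max(1, int(cap))
    return max(1, min(cpus, ram_gb // gb_per_job))
```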
comfyanonymous
1e098d6132
Don't add template to qwen2.5vl when template is in prompt. (#10043)
Make the hunyuan image refiner template_end 36.
2025-09-26 18:34:17 -04:00
clsferguson
30ed9ae7cf
Fix entrypoint.sh
Removed escapes in python version.
2025-09-26 15:15:58 -06:00