Commit Graph

3957 Commits

Author SHA1 Message Date
clsferguson
f655b2a960
feat(build,docker): add multi-stage build to compile and bundle SageAttention 2.2; enable via --use-sage-attention
Introduce a two-stage Docker build that compiles SageAttention 2.2/2++ from the upstream repository using Debian’s CUDA toolkit (nvcc) and the same Torch stack (cu129) as the runtime, then installs the produced wheel in the final slim image. This ensures the sageattention module is present at launch and makes the existing --use-sage-attention flag functional. The runtime image remains minimal while the builder stage carries heavy toolchains; matching Torch across stages prevents CUDA/ABI mismatch. Also retains the previous launch command so ComfyUI auto-enables SageAttention on startup.
2025-09-21 21:45:26 -06:00
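The two-stage layout this commit describes can be sketched as a minimal Dockerfile. This is an illustration, not the repo's actual Dockerfile: the image tags, the upstream repository URL, and the launch command are assumptions based on the commit message.

```dockerfile
# --- Builder stage: carries the heavy toolchain, discarded afterwards ---
FROM python:3.12-slim AS builder
# Debian's nvidia-cuda-toolkit (nvcc) lives in non-free; repo setup omitted here.
RUN apt-get update && apt-get install -y --no-install-recommends \
        git build-essential nvidia-cuda-toolkit
# Install the SAME Torch/CUDA stack as the runtime so the compiled
# extension's ABI matches at load time.
RUN pip install torch --index-url https://download.pytorch.org/whl/cu129
# Shallow-clone upstream SageAttention and compile it into a wheel.
RUN git clone --depth 1 https://github.com/thu-ml/SageAttention /src \
 && pip wheel --no-deps --wheel-dir /wheels /src

# --- Runtime stage: stays slim, only the built wheel is copied in ---
FROM python:3.12-slim
COPY --from=builder /wheels /wheels
RUN pip install /wheels/*.whl
# Assumed launch command; the flag makes ComfyUI enable SageAttention at startup.
CMD ["python", "main.py", "--use-sage-attention"]
```

Because the wheel is built against the same `cu129` Torch stack the runtime installs, the `sageattention` module imports cleanly when `--use-sage-attention` is passed.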
clsferguson
77f35a886c
docs(readme): document baked-in SageAttention 2.2 and default enable via --use-sage-attention
Update README to reflect that SageAttention 2.2/2++ is compiled into the
image at build time and enabled automatically on launch using
--use-sage-attention. Clarifies NVIDIA GPU setup expectations and that no
extra steps are required to activate SageAttention in container runs.

Changes:
- Features: add “SageAttention 2.2 baked in” and “Auto-enabled at launch”.
- Getting Started: note that SageAttention is compiled during docker build
  and requires no manual install.
- Docker Compose: confirm the image launches with SageAttention enabled by default.
- Usage: add a SageAttention subsection with startup log verification notes.
- General cleanup and wording to align with current image behavior.

No functional code changes; documentation only.
2025-09-21 21:12:04 -06:00
clsferguson
051c46b6dc
feat(build,docker): bake SageAttention 2.2 from source and enable in ComfyUI with --use-sage-attention
Adds a multi-stage Docker build that compiles SageAttention 2.2/2++ from the upstream repository head into a wheel using nvcc, then installs it into the slim runtime to keep images small. Ensures the builder installs the same Torch CUDA 12.9 stack as the runtime so the compiled extension ABI matches at load time. Shallow clones the SageAttention repo during build to always pull the latest version on each new image build. Updates the container launch to pass --use-sage-attention so ComfyUI enables SageAttention at startup when the package is present. This change keeps the runtime minimal while delivering up-to-date, high-performance attention kernels for modern NVIDIA GPUs in ComfyUI.
2025-09-21 21:03:24 -06:00
clsferguson
fb64caf236
chore(bootstrap): trace root-only setup via run()
Introduce a run() helper that shell-quotes and prints each command before execution, and use it for mkdir/chown/chmod in the /usr/local-only Python target loop. This makes permission and path fixes visible in logs for easier debugging, preserves existing error-tolerance with || true, and remains compatible with set -euo pipefail and the runuser re-exec (runs only in the root branch). No functional changes beyond added verbosity; non-/usr/local paths remain no-op.
2025-09-17 14:49:01 -06:00
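The `run()` helper described above can be sketched as follows. The function name comes from the commit; the body is an assumed minimal form of "shell-quote and print each command before executing it":

```shell
#!/usr/bin/env bash
set -euo pipefail

# Print each command, shell-quoted, before executing it, so the exact
# mkdir/chown/chmod invocations show up in container logs.
run() {
  printf '+'
  printf ' %q' "$@"   # %q re-quotes each argument safely for logging
  printf '\n'
  "$@"
}

# Usage mirroring the bootstrap loop; `|| true` keeps permission
# failures non-fatal under `set -e`, as the commit describes.
run mkdir -p "/tmp/example dir" || true
```

Note that `%q` handles arguments containing spaces or shell metacharacters, so the logged line can be copy-pasted back into a shell verbatim.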
clsferguson
c1451b099b
fix: escapes on quotation marks
Removed stray escapes from quotation marks that caused the container to fail to start.
2025-09-17 13:03:09 -06:00
clsferguson
db506ae51c
fix: upgrade custom-node deps each start and shallow-update ComfyUI-Manager
This updates ComfyUI-Manager on container launch using a shallow fetch/reset pattern and cleans untracked files to ensure a fresh working tree, which is the recommended way to refresh depth-1 clones without full history. It also installs all detected requirements.txt files with pip --upgrade and the only-if-needed strategy so direct requirements are upgraded within constraints on each run, while still excluding Manager from wheel builds to avoid setuptools flat-layout errors.
2025-09-17 12:30:08 -06:00
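The shallow fetch/reset/clean pattern and the pip upgrade strategy described above can be sketched as two helpers. The function names and argument shapes are hypothetical; the git and pip invocations follow the commit's description:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Refresh a depth-1 clone without accumulating history: fetch only the
# new tip, hard-reset the working tree to it, and remove untracked files.
refresh_shallow() {
  local dir="$1" ref="$2"   # e.g. refresh_shallow custom_nodes/ComfyUI-Manager origin/main
  git -C "$dir" fetch --depth 1 origin
  git -C "$dir" reset --hard "$ref"
  git -C "$dir" clean -fd
}

# Upgrade a custom node's direct requirements on each start.
# only-if-needed upgrades a package only when the stated requirement
# is not already satisfied, keeping upgrades within constraints.
upgrade_reqs() {
  local req="$1"
  pip install --upgrade --upgrade-strategy only-if-needed -r "$req"
}
```

`clean -fd` is what guarantees the "fresh working tree" the commit mentions: `reset --hard` alone leaves untracked files behind.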
clsferguson
db7f8730db
build: install PyAV 14+, add nvidia-ml-py, fix torch index
This adds av>=14.2 to satisfy Comfy’s API-node canary, ensuring video/audio nodes import without error, and uses the standard PyTorch CUDA 12.9 index URL syntax for reliability. It also installs nvidia-ml-py to align with the ecosystem shift away from deprecated pynvml, reducing future NVML warnings while preserving current functionality. The rest of the base remains unchanged, and existing ComfyUI requirements continue to install as before.
2025-09-17 12:09:26 -06:00
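The install lines this commit describes would look roughly like the following. These are assumed invocations reconstructed from the commit message (network-dependent, so shown as a sketch); the `cu129` index follows PyTorch's standard per-CUDA-version index URL pattern:

```shell
# av>=14.2 satisfies Comfy's API-node canary; nvidia-ml-py replaces
# the deprecated pynvml bindings.
pip install "av>=14.2" nvidia-ml-py

# Standard PyTorch extra-index syntax for the CUDA 12.9 wheel builds.
pip install torch torchvision torchaudio \
    --index-url https://download.pytorch.org/whl/cu129
```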
clsferguson
87b73d7322
Update README.md 2025-09-17 10:25:54 -06:00
clsferguson
d4b1a405f5
Switch to Python 3.12 base and add CMake for native builds
Update the Dockerfile to use python:3.12.11-slim-trixie to align with available cp312 wheels (notably MediaPipe) and avoid 3.13 ABI gaps, add cmake alongside build-essential to support native builds like dlib, keep the CUDA-enabled PyTorch install via the vendor index, and leave user/workdir/entrypoint/port settings unchanged to preserve runtime behavior.
2025-09-17 09:54:02 -06:00
clsferguson
ab630fcca0
Add cleanup step in sync-build-release workflow 2025-09-12 20:36:26 -06:00
clsferguson
c7989867d7
Refactor upstream release check and cleanup steps 2025-09-12 20:34:49 -06:00
GitHub Actions
b9600e36d4 Merge upstream/master, keep local README.md 2025-09-13 00:21:49 +00:00
comfyanonymous
a3b04de700
Hunyuan refiner vae now works with tiled. (#9836) 2025-09-12 19:46:46 -04:00
Jedrzej Kosinski
d7f40442f9
Enable Runtime Selection of Attention Functions (#9639)
* Looking into a @wrap_attn decorator that looks for an 'optimized_attention_override' entry in transformer_options

* Created logging code for this branch so that it can be used to track down all the code paths where transformer_options would need to be added

* Fix memory usage issue with inspect

* Made WAN attention receive transformer_options, test node added to wan to test out attention override later

* Added **kwargs to all attention functions so transformer_options could potentially be passed through

* Make sure wrap_attn doesn't make itself recurse infinitely, attempt to load SageAttention and FlashAttention if not enabled so that they can be marked as available or not, create registry for available attention

* Turn off attention logging for now, make AttentionOverrideTestNode have a dropdown with available attention (this is a test node only)

* Make flux work with optimized_attention_override

* Add logs to verify optimized_attention_override is passed all the way into attention function

* Make Qwen work with optimized_attention_override

* Made hidream work with optimized_attention_override

* Made wan patches_replace work with optimized_attention_override

* Made SD3 work with optimized_attention_override

* Made HunyuanVideo work with optimized_attention_override

* Made Mochi work with optimized_attention_override

* Made LTX work with optimized_attention_override

* Made StableAudio work with optimized_attention_override

* Made optimized_attention_override work with ACE Step

* Made Hunyuan3D work with optimized_attention_override

* Make CosmosPredict2 work with optimized_attention_override

* Made CosmosVideo work with optimized_attention_override

* Made Omnigen 2 work with optimized_attention_override

* Made StableCascade work with optimized_attention_override

* Made AuraFlow work with optimized_attention_override

* Made Lumina work with optimized_attention_override

* Made Chroma work with optimized_attention_override

* Made SVD work with optimized_attention_override

* Fix WanI2VCrossAttention so that it expects to receive transformer_options

* Fixed Wan2.1 Fun Camera transformer_options passthrough

* Fixed WAN 2.1 VACE transformer_options passthrough

* Add optimized to get_attention_function

* Disable attention logs for now

* Remove attention logging code

* Remove _register_core_attention_functions, as we wouldn't want someone to call that, just in case

* Satisfy ruff

* Remove AttentionOverrideTest node, that's something to cook up for later
2025-09-12 18:07:38 -04:00
comfyanonymous
b149e2e1e3
Better way of doing the generator for the hunyuan image noise aug. (#9834) 2025-09-12 17:53:15 -04:00
Alexander Piskun
581bae2af3
convert Moonvalley API nodes to the V3 schema (#9698) 2025-09-12 17:41:26 -04:00
Alexander Piskun
af99928f22
convert Canny node to V3 schema (#9743) 2025-09-12 17:40:34 -04:00
Alexander Piskun
53c9c7d39a
convert CFG nodes to V3 schema (#9717) 2025-09-12 17:39:55 -04:00
Alexander Piskun
ba68e83f1c
convert nodes_cond.py to V3 schema (#9719) 2025-09-12 17:39:30 -04:00
Alexander Piskun
dcb8834983
convert Cosmos nodes to V3 schema (#9721) 2025-09-12 17:38:46 -04:00
Alexander Piskun
f9d2e4b742
convert WanCameraEmbedding node to V3 schema (#9714) 2025-09-12 17:38:12 -04:00
Alexander Piskun
45bc1f5c00
convert Minimax API nodes to the V3 schema (#9693) 2025-09-12 17:37:31 -04:00
Alexander Piskun
0aa074a420
add kling-v2-1 model to the KlingStartEndFrame node (#9630) 2025-09-12 17:29:03 -04:00
comfyanonymous
7757d5a657
Set default hunyuan refiner shift to 4.0 (#9833) 2025-09-12 16:40:12 -04:00
comfyanonymous
e600520f8a
Fix hunyuan refiner blownout colors at noise aug less than 0.25 (#9832) 2025-09-12 16:35:34 -04:00
comfyanonymous
fd2b820ec2
Add noise augmentation to hunyuan image refiner. (#9831)
This was missing and should help with colors being blown out.
2025-09-12 16:03:08 -04:00
clsferguson
0d76dc1b18
Implement finalize job for workflow outcomes
Added a finalize job to handle outcomes based on upstream releases and publish results.
2025-09-12 10:56:16 -06:00
clsferguson
1da5dc48e6
Refactor CI workflow conditions and cleanup steps
Updated conditions for build and publish steps in CI workflow.
2025-09-12 10:47:02 -06:00
Benjamin Lu
d6b977b2e6
Bump frontend to 1.26.11 (#9809) 2025-09-12 00:46:01 -04:00
Jedrzej Kosinski
15ec9ea958
Add Output to V3 Combo type to match what is possible with V1 (#9813) 2025-09-12 00:44:20 -04:00
comfyanonymous
33bd9ed9cb
Implement hunyuan image refiner model. (#9817) 2025-09-12 00:43:20 -04:00
comfyanonymous
18de0b2830
Fast preview for hunyuan image. (#9814) 2025-09-11 19:33:02 -04:00
clsferguson
327d7ea37f
Fix case pattern for directory ownership and permissions 2025-09-11 13:21:13 -06:00
ComfyUI Wiki
df6850fae8
Update template to 0.1.81 (#9811) 2025-09-11 14:59:26 -04:00
clsferguson
18bca70c8f
Improve logging and ownership management in entrypoint.sh 2025-09-11 10:13:25 -06:00
clsferguson
d303280af5
Refactor entrypoint.sh for improved logging and ownership 2025-09-11 09:57:29 -06:00
clsferguson
7b3b6fcbfd
Remove provenance and SBOM options from Docker builds 2025-09-10 22:53:05 -06:00
clsferguson
9df654da53
Remove provenance and sbom from build-release.yml
Removed 'provenance' and 'sbom' options from Docker build steps.
2025-09-10 22:50:24 -06:00
clsferguson
84cf9a63e7
Update release body message in workflow 2025-09-10 21:52:07 -06:00
clsferguson
a8ada2ac01
Add latest Docker image pull command to release notes
Updated release notes to include latest Docker image pull command.
2025-09-10 21:50:14 -06:00
clsferguson
4b11a769ad
Update release message for Docker image
Updated release body to indicate version sync from upstream.
2025-09-10 21:47:54 -06:00
clsferguson
97ac355d16
Add platform specification for Docker builds 2025-09-10 21:39:39 -06:00
clsferguson
7decaccfcd
Add platform specification for Docker builds 2025-09-10 21:38:58 -06:00
comfyanonymous
e01e99d075
Support hunyuan image distilled model. (#9807) 2025-09-10 23:17:34 -04:00
clsferguson
f5e15d6ec8
Add manual workflow for resume publishing 2025-09-10 21:11:35 -06:00
clsferguson
764e46efe0
Refactor GitHub Actions workflow for Docker builds 2025-09-10 21:07:18 -06:00
clsferguson
a86c49b5ff
Refactor publish and finalize conditions in workflow
Updated conditions for publishing and finalizing outcomes in the CI workflow.
2025-09-10 21:05:46 -06:00
GitHub Actions
b1552e1dc2 Merge upstream/master, keep local README.md 2025-09-11 00:24:08 +00:00
comfyanonymous
72212fef66 ComfyUI version 0.3.59 2025-09-10 17:25:41 -04:00
ComfyUI Wiki
df34f1549a
Update template to 0.1.78 (#9806)
* Update template to 0.1.77

* Update template to 0.1.78
2025-09-10 14:16:41 -07:00