Compare commits

...

12 Commits

Author SHA1 Message Date
Alexander Piskun
5f655ef74b
Merge 28d538ddf9 into a4b7e3beed 2026-05-10 00:03:29 +09:00
Comfy Org PR Bot
a4b7e3beed
Bump comfyui-frontend-package to 1.43.18 (#13809)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-05-09 07:53:10 -07:00
Alexander Piskun
7bbf1e8169
[Partner Nodes] Tripo3D 3.1 model (#13788)
* feat(api-nodes): add Tripo3D 3.1 model

Signed-off-by: bigcat88 <bigcat88@icloud.com>

* fix: price badges algo

Signed-off-by: bigcat88 <bigcat88@icloud.com>

* [Partner Nodes] deprecate "quad" param for the TripoMultiviewToModel node

Signed-off-by: bigcat88 <bigcat88@icloud.com>

---------

Signed-off-by: bigcat88 <bigcat88@icloud.com>
2026-05-08 21:38:17 -07:00
lin-bot23
8b08bfdcbe
Add description field to blueprint subgraphs (#13797)
* Add description field to all blueprint subgraphs

Sets the 'description' field on every subgraph blueprint node,
which will show on the node preview and tooltip. Covers all 51
blueprint files under blueprints/.

* Update blueprint descriptions with researched model info

* Refine blueprint descriptions with researched model specs from docs

Updates subgraph descriptions across all 51 blueprints with accurate
model details drawn from ComfyUI docs, including:
- Flux.1 Dev: 12B open-weights, Pro-level quality
- Flux.2 Klein 4B: fastest Flux, distilled architecture
- Qwen-Image: 20B MMDiT, multilingual text rendering
- Z-Image-Turbo: distilled 6B DiT, sub-second inference
- LTX-2/2.3: 19B DiT audio-video foundation model
- Wan2.2: open-source, 14B/1.3B variants
- ACE-Step 1.5: ~1s full-song generation
- GPU shader nodes consistently labeled as fragment shaders

* Strip marketing fluff and license info from descriptions

* Fix Canny to Video (LTX 2.0) description

* Remove 'local-' prefix from subgraph names

* Preserve UTF-8 encoding in JSON files (ensure_ascii=False)

* Apply review suggestions from alexisrolland

- Rename 'Image to Model (Hunyuan3d 2.1)' -> 'Image to 3D Model (Hunyuan3d 2.1)'
- Rename 'Image Upscale(Z-image-Turbo)' -> 'Image Upscale (Z-image-Turbo)'
- Rename 'Video Inpaint(Wan2.1 VACE)' -> 'Video Inpaint (Wan 2.1 VACE)'
- Use 'Black Forest Labs' branding in Flux descriptions
- Use 'Google's Gemini' with possessive in captioning nodes
- Normalize 'Wan 2.2' and 'Wan 2.1' spacing in descriptions

* fix: revert Color Adjustment.json to preserve original GLSL shader content

Only adds the 'description' field without modifying the shader code
(which contained Unicode escape \\u2192 that should be preserved).

* Apply CodeRabbit review suggestions

- Color Adjustment: include vibrance in description
- Image Blur: expand to Gaussian/Box/Radial modes
- Flux.2 Klein 4B: narrow to image edit only (no T2I)
- NetaYume Lumina: correct model base (Neta Lumina, not Lumina-Next)

---------

Co-authored-by: linmoumou <linmoumou@linmoumoudeMac-mini.local>
Co-authored-by: Daxiong (Lin) <contact@comfyui-wiki.com>
2026-05-09 11:26:13 +08:00
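
A minimal sketch of how such a bulk description pass could be scripted, not the PR's actual tooling: the "definitions" -> "subgraphs" key layout and the DESCRIPTIONS mapping below are assumptions for illustration, while the ensure_ascii=False detail is the one the commit calls out for keeping UTF-8 characters literal instead of escaping them as \uXXXX.

    # Hedged sketch: bulk-set "description" on blueprint subgraphs and rewrite
    # the JSON preserving UTF-8. Key layout under "definitions" is an assumption.
    import json
    from pathlib import Path

    DESCRIPTIONS = {
        "Text to Image (Z-Image-Turbo)":
            "Generates images from text prompts using Z-Image-Turbo, Alibaba's distilled 6B DiT model.",
    }

    for path in Path("blueprints").rglob("*.json"):
        data = json.loads(path.read_text(encoding="utf-8"))
        for subgraph in data.get("definitions", {}).get("subgraphs", []):  # assumed layout
            desc = DESCRIPTIONS.get(subgraph.get("name"))
            if desc is not None:
                subgraph["description"] = desc
        # ensure_ascii=False writes non-ASCII characters (e.g. "→") literally
        path.write_text(json.dumps(data, indent=2, ensure_ascii=False) + "\n", encoding="utf-8")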
Matt Miller
4e823431cc
Add cloud-runtime experiment node-schema endpoints to spec (#13806)
* Add cloud-runtime experiment node-schema endpoints to spec

Replace the GET operations at /api/experiment/nodes and
/api/experiment/nodes/{id} with getNodeInfoSchema and getNodeByID —
the optimized, ETag-tagged object_info schema endpoints the cloud
frontend depends on for the workflow editor.

Each operation is tagged x-runtime: [cloud] and uses the runtime-only
tag for cloud-side codegen exclusion. Response headers document the
ETag and Cache-Control validators; 304 Not Modified is declared for
RFC 7232 conditional GETs.

Remove the now-unused CloudNodeList schema to keep Spectral clean.

Co-authored-by: Matt Miller <MillerMedia@users.noreply.github.com>

* spec: document If-None-Match header on conditional GET endpoints

Both `getNodeInfoSchema` and `getNodeByID` advertise `ETag` response
headers and a `304 Not Modified` response, but the spec didn't declare
the `If-None-Match` request header that triggers conditional validation.
Adding it as an optional header parameter on both ops so client codegen
exposes the conditional-GET pattern.
2026-05-08 19:14:23 -07:00
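
For reference, the conditional-GET pattern this spec change documents looks roughly like the following from the client side. This is a hedged sketch with a placeholder host, using only the standard library; it caches the ETag from a 200 response and replays it via If-None-Match so the server can answer 304 Not Modified.

    # Hedged client-side sketch of the RFC 7232 conditional GET documented above.
    import urllib.error
    import urllib.request

    BASE = "http://127.0.0.1:8188"  # placeholder host
    _cached_etag = None
    _cached_body = None

    def fetch_node_schema():
        global _cached_etag, _cached_body
        req = urllib.request.Request(f"{BASE}/api/experiment/nodes")
        if _cached_etag:
            req.add_header("If-None-Match", _cached_etag)
        try:
            with urllib.request.urlopen(req) as resp:
                _cached_etag = resp.headers.get("ETag")
                _cached_body = resp.read()
        except urllib.error.HTTPError as err:
            if err.code != 304:   # 304 means the cached copy is still valid
                raise
        return _cached_body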
comfyanonymous
66669b2ded
I don't think there was any because nobody complained. (#13807)
2026-05-08 17:32:14 -07:00
Alexander Piskun
65045730a6
[Partner Nodes] additionally use Baidu server to detect the accessibility of internet (#13803)
Signed-off-by: bigcat88 <bigcat88@icloud.com>
2026-05-08 13:11:52 -07:00
Alexander Piskun
28d538ddf9
Merge branch 'master' into feat/string-min-max-length
2026-03-25 08:30:29 +02:00
Alexander Piskun
e326b41d62
Merge branch 'master' into feat/string-min-max-length
2026-03-14 07:37:05 +02:00
Alexander Piskun
aa9e7a84bc
Merge branch 'master' into feat/string-min-max-length
2026-03-13 19:21:36 +02:00
Alexander Piskun
9b7a2a3248
Merge branch 'master' into feat/string-min-max-length
2026-03-13 06:51:41 +02:00
bigcat88
5b913f0377
feat: add minLength/maxLength validation for String inputs
2026-03-09 09:00:33 +02:00
61 changed files with 417 additions and 238 deletions

View File

@ -431,9 +431,10 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Image Tools/Color adjust"
"category": "Image Tools/Color adjust",
"description": "Adjusts image brightness and contrast using a real-time GPU fragment shader."
}
]
},
"extra": {}
}
}

View File

@ -162,7 +162,7 @@
},
"revision": 0,
"config": {},
"name": "local-Canny to Image (Z-Image-Turbo)",
"name": "Canny to Image (Z-Image-Turbo)",
"inputNode": {
"id": -10,
"bounding": [
@ -1553,7 +1553,8 @@
"VHS_MetadataImage": true,
"VHS_KeepIntermediate": true
},
"category": "Image generation and editing/Canny to image"
"category": "Image generation and editing/Canny to image",
"description": "Generates an image from a Canny edge map using Z-Image-Turbo, with text conditioning."
}
]
},
@ -1574,4 +1575,4 @@
}
},
"version": 0.4
}
}

View File

@ -192,7 +192,7 @@
},
"revision": 0,
"config": {},
"name": "local-Canny to Video (LTX 2.0)",
"name": "Canny to Video (LTX 2.0)",
"inputNode": {
"id": -10,
"bounding": [
@ -3600,7 +3600,8 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Video generation and editing/Canny to video"
"category": "Video generation and editing/Canny to video",
"description": "Generates video from Canny edge maps using LTX-2, with optional synchronized audio."
}
]
},
@ -3616,4 +3617,4 @@
}
},
"version": 0.4
}
}

View File

@ -377,8 +377,9 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Image Tools/Color adjust"
"category": "Image Tools/Color adjust",
"description": "Adds lens-style chromatic aberration (color fringing) using a real-time GPU fragment shader."
}
]
}
}
}

View File

@ -596,7 +596,8 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Image Tools/Color adjust"
"category": "Image Tools/Color adjust",
"description": "Adjusts saturation, temperature, tint, and vibrance using a real-time GPU fragment shader."
}
]
}

View File

@ -1129,7 +1129,8 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Image Tools/Color adjust"
"category": "Image Tools/Color adjust",
"description": "Balances colors across shadows, midtones, and highlights using a real-time GPU fragment shader."
}
]
}

View File

@ -608,7 +608,8 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Image Tools/Color adjust"
"category": "Image Tools/Color adjust",
"description": "Fine-tunes tone and color with per-channel curve adjustments using a real-time GPU fragment shader."
}
]
}

View File

@ -1609,7 +1609,8 @@
}
],
"extra": {},
"category": "Image Tools/Crop"
"category": "Image Tools/Crop",
"description": "Splits an image into a 2×2 grid of four equal tiles."
}
]
},

View File

@ -2946,7 +2946,8 @@
}
],
"extra": {},
"category": "Image Tools/Crop"
"category": "Image Tools/Crop",
"description": "Splits an image into a 3×3 grid of nine equal tiles."
}
]
},

View File

@ -1579,7 +1579,8 @@
"VHS_MetadataImage": true,
"VHS_KeepIntermediate": true
},
"category": "Image generation and editing/Depth to image"
"category": "Image generation and editing/Depth to image",
"description": "Generates an image from a depth map using Z-Image-Turbo with text conditioning."
},
{
"id": "458bdf3c-4b58-421c-af50-c9c663a4d74c",
@ -2461,7 +2462,8 @@
]
},
"workflowRendererVersion": "LG"
}
},
"description": "Estimates a monocular depth map from an input image using the Lotus depth estimation model."
}
]
},

View File

@ -4233,7 +4233,8 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Video generation and editing/Depth to video"
"category": "Video generation and editing/Depth to video",
"description": "Generates video from depth maps using LTX-2, with optional synchronized audio."
},
{
"id": "38b60539-50a7-42f9-a5fe-bdeca26272e2",
@ -5192,7 +5193,8 @@
],
"extra": {
"workflowRendererVersion": "LG"
}
},
"description": "Estimates a monocular depth map from an input image using the Lotus depth estimation model."
}
]
},

View File

@ -450,9 +450,10 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Image Tools/Blur"
"category": "Image Tools/Blur",
"description": "Applies bilateral (edge-preserving) blur to soften images while retaining detail."
}
]
},
"extra": {}
}
}

View File

@ -580,8 +580,9 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Image Tools/Color adjust"
"category": "Image Tools/Color adjust",
"description": "Adds procedural film grain texture for a cinematic look via GPU fragment shader."
}
]
}
}
}

View File

@ -3350,7 +3350,8 @@
}
],
"extra": {},
"category": "Video generation and editing/First-Last-Frame to Video"
"category": "Video generation and editing/First-Last-Frame to Video",
"description": "Generates a video interpolating between first and last keyframes using LTX-2.3."
}
]
},

View File

@ -575,8 +575,9 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Image Tools/Color adjust"
"category": "Image Tools/Color adjust",
"description": "Adds a glow/bloom effect around bright image areas via GPU fragment shader."
}
]
}
}
}

View File

@ -752,8 +752,9 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Image Tools/Color adjust"
"category": "Image Tools/Color adjust",
"description": "Adjusts hue, saturation, and lightness of an image using a real-time GPU fragment shader."
}
]
}
}
}

View File

@ -374,7 +374,8 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Image Tools/Blur"
"category": "Image Tools/Blur",
"description": "Applies Gaussian, Box, or Radial blur to soften images and create stylized depth or motion effects."
}
]
}

View File

@ -310,7 +310,8 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Text generation/Image Captioning"
"category": "Text generation/Image Captioning",
"description": "Generates descriptive captions for images using Google's Gemini multimodal LLM."
}
]
}

View File

@ -315,8 +315,9 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Image Tools/Color adjust"
"category": "Image Tools/Color adjust",
"description": "Manipulates individual RGBA channels for masking, compositing, and channel effects."
}
]
}
}
}

View File

@ -2138,7 +2138,8 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Image generation and editing/Edit image"
"category": "Image generation and editing/Edit image",
"description": "Edits images via text instructions using FireRed Image Edit 1.1, a diffusion-based instruction-following editing model."
}
]
},

View File

@ -1472,7 +1472,8 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Image generation and editing/Edit image"
"category": "Image generation and editing/Edit image",
"description": "Edits an input image via text instructions using FLUX.2 [klein] 4B."
},
{
"id": "6007e698-2ebd-4917-84d8-299b35d7b7ab",
@ -1821,7 +1822,8 @@
],
"extra": {
"workflowRendererVersion": "LG"
}
},
"description": "Applies reference image conditioning for style/identity transfer (Flux.2 Klein 4B)."
}
]
},
@ -1837,4 +1839,4 @@
}
},
"version": 0.4
}
}

View File

@ -1417,7 +1417,8 @@
}
],
"extra": {},
"category": "Image generation and editing/Edit image"
"category": "Image generation and editing/Edit image",
"description": "Edits images via text instructions using LongCat Image Edit, an instruction-following image editing diffusion model."
}
]
},

View File

@ -132,7 +132,7 @@
},
"revision": 0,
"config": {},
"name": "local-Image Edit (Qwen 2511)",
"name": "Image Edit (Qwen 2511)",
"inputNode": {
"id": -10,
"bounding": [
@ -1468,7 +1468,8 @@
"VHS_MetadataImage": true,
"VHS_KeepIntermediate": true
},
"category": "Image generation and editing/Edit image"
"category": "Image generation and editing/Edit image",
"description": "Edits images via text instructions using Qwen-Image-Edit-2511 with improved character consistency and integrated LoRA."
}
]
},
@ -1489,4 +1490,4 @@
}
},
"version": 0.4
}
}

View File

@ -1188,7 +1188,8 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Image generation and editing/Inpaint image"
"category": "Image generation and editing/Inpaint image",
"description": "Inpaints masked image regions using Flux.1 fill [dev], Black Forest Labs' inpainting/outpainting model."
}
]
},
@ -1202,4 +1203,4 @@
},
"ue_links": []
}
}
}

View File

@ -1548,7 +1548,8 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Image generation and editing/Inpaint image"
"category": "Image generation and editing/Inpaint image",
"description": "Inpaints masked regions using Qwen-Image, extending its multilingual text rendering to inpainting tasks."
},
{
"id": "56a1f603-fbd2-40ed-94ef-c9ecbd96aca8",
@ -1907,7 +1908,8 @@
],
"extra": {
"workflowRendererVersion": "LG"
}
},
"description": "Expands and softens mask edges to reduce visible seams after image processing."
}
]
},

View File

@ -742,9 +742,10 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Image Tools/Color adjust"
"category": "Image Tools/Color adjust",
"description": "Adjusts black point, white point, and gamma for tonal range control via GPU shader."
}
]
},
"extra": {}
}
}

View File

@ -1919,7 +1919,8 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Image generation and editing/Outpaint image"
"category": "Image generation and editing/Outpaint image",
"description": "Outpaints beyond image boundaries using Qwen-Image's outpainting capabilities."
},
{
"id": "f93c215e-c393-460e-9534-ed2c3d8a652e",
@ -2278,7 +2279,8 @@
],
"extra": {
"workflowRendererVersion": "LG"
}
},
"description": "Expands and softens mask edges to reduce visible seams after image processing."
},
{
"id": "2a4b2cc0-db37-4302-a067-da392f38f06b",
@ -2733,7 +2735,8 @@
],
"extra": {
"workflowRendererVersion": "LG"
}
},
"description": "Scales both image and mask together while preserving alignment for editing workflows."
}
]
},

View File

@ -141,7 +141,7 @@
},
"revision": 0,
"config": {},
"name": "local-Image Upscale(Z-image-Turbo)",
"name": "Image Upscale (Z-image-Turbo)",
"inputNode": {
"id": -10,
"bounding": [
@ -1302,7 +1302,8 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Image generation and editing/Enhance"
"category": "Image generation and editing/Enhance",
"description": "Upscales images to higher resolution using Z-Image-Turbo."
}
]
},

View File

@ -99,7 +99,7 @@
},
"revision": 0,
"config": {},
"name": "local-Image to Depth Map (Lotus)",
"name": "Image to Depth Map (Lotus)",
"inputNode": {
"id": -10,
"bounding": [
@ -948,7 +948,8 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Image generation and editing/Depth to image"
"category": "Image generation and editing/Depth to image",
"description": "Estimates a monocular depth map from an input image using the Lotus depth estimation model."
}
]
},
@ -964,4 +965,4 @@
"workflowRendererVersion": "LG"
},
"version": 0.4
}
}

View File

@ -1586,7 +1586,8 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Image generation and editing/Image to layers"
"category": "Image generation and editing/Image to layers",
"description": "Decomposes an image into variable-resolution RGBA layers for independent editing using Qwen-Image-Layered."
}
]
},

View File

@ -72,7 +72,7 @@
},
"revision": 0,
"config": {},
"name": "local-Image to Model (Hunyuan3d 2.1)",
"name": "Image to 3D Model (Hunyuan3d 2.1)",
"inputNode": {
"id": -10,
"bounding": [
@ -765,7 +765,8 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "3D/Image to 3D Model"
"category": "3D/Image to 3D Model",
"description": "Generates 3D mesh models from a single input image using Hunyuan3D 2.0/2.1."
}
]
},

View File

@ -4223,7 +4223,8 @@
"extra": {
"workflowRendererVersion": "Vue-corrected"
},
"category": "Video generation and editing/Image to video"
"category": "Video generation and editing/Image to video",
"description": "Generates video from a single input image using LTX-2.3."
}
]
},

View File

@ -206,7 +206,7 @@
},
"revision": 0,
"config": {},
"name": "local-Image to Video (Wan 2.2)",
"name": "Image to Video (Wan 2.2)",
"inputNode": {
"id": -10,
"bounding": [
@ -2027,7 +2027,8 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Video generation and editing/Image to video"
"category": "Video generation and editing/Image to video",
"description": "Generates video from an image and text prompt using Wan 2.2, supporting T2V and I2V."
}
]
},

View File

@ -134,7 +134,7 @@
},
"revision": 0,
"config": {},
"name": "local-Pose to Image (Z-Image-Turbo)",
"name": "Pose to Image (Z-Image-Turbo)",
"inputNode": {
"id": -10,
"bounding": [
@ -1298,7 +1298,8 @@
"VHS_MetadataImage": true,
"VHS_KeepIntermediate": true
},
"category": "Image generation and editing/Pose to image"
"category": "Image generation and editing/Pose to image",
"description": "Generates an image from pose keypoints using Z-Image-Turbo with text conditioning."
}
]
},
@ -1319,4 +1320,4 @@
}
},
"version": 0.4
}
}

View File

@ -3870,7 +3870,8 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Video generation and editing/Pose to video"
"category": "Video generation and editing/Pose to video",
"description": "Generates video from pose reference frames using LTX-2, with optional synchronized audio."
}
]
},

View File

@ -270,9 +270,10 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Text generation/Prompt enhance"
"category": "Text generation/Prompt enhance",
"description": "Expands short text prompts into detailed descriptions using a text generation model for better generation quality."
}
]
},
"extra": {}
}
}

View File

@ -302,8 +302,9 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Image Tools/Sharpen"
"category": "Image Tools/Sharpen",
"description": "Sharpens image details using a GPU fragment shader for enhanced clarity."
}
]
}
}
}

View File

@ -222,7 +222,7 @@
},
"revision": 0,
"config": {},
"name": "local-Text to Audio (ACE-Step 1.5)",
"name": "Text to Audio (ACE-Step 1.5)",
"inputNode": {
"id": -10,
"bounding": [
@ -1502,7 +1502,8 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Audio/Music generation"
"category": "Audio/Music generation",
"description": "Generates audio/music from text prompts using ACE-Step 1.5, a diffusion-based audio generation model."
}
]
},
@ -1518,4 +1519,4 @@
}
},
"version": 0.4
}
}

View File

@ -1029,7 +1029,8 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Image generation and editing/Text to image"
"category": "Image generation and editing/Text to image",
"description": "Generates images from text prompts using Flux.1 [dev], Black Forest Labs' 12B diffusion model."
}
]
},
@ -1043,4 +1044,4 @@
},
"ue_links": []
}
}
}

View File

@ -1023,7 +1023,8 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Image generation and editing/Text to image"
"category": "Image generation and editing/Text to image",
"description": "Generates images from text prompts using Flux.1 Krea Dev, a Black Forest Labs × Krea collaboration variant."
}
]
},
@ -1037,4 +1038,4 @@
},
"ue_links": []
}
}
}

View File

@ -1104,7 +1104,8 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Image generation and editing/Text to image"
"category": "Image generation and editing/Text to image",
"description": "Generates images from text prompts using NetaYume Lumina, fine-tuned from Neta Lumina for anime-style and illustration generation."
},
{
"id": "a07fdf06-1bda-4dac-bdbd-63ee8ebca1c9",
@ -1458,11 +1459,12 @@
],
"extra": {
"workflowRendererVersion": "LG"
}
},
"description": "Encodes a negative text prompt via CLIP for classifier-free guidance in anime-style generation (NetaYume Lumina)."
}
]
},
"extra": {
"ue_links": []
}
}
}

View File

@ -1941,7 +1941,8 @@
"extra": {
"workflowRendererVersion": "Vue-corrected"
},
"category": "Image generation and editing/Text to image"
"category": "Image generation and editing/Text to image",
"description": "Generates images from text prompts using Qwen-Image-2512, with enhanced human realism and finer natural detail over the base version."
}
]
},

View File

@ -1873,7 +1873,8 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Image generation and editing/Text to image"
"category": "Image generation and editing/Text to image",
"description": "Generates images from text prompts using Qwen-Image, Alibaba's 20B MMDiT model with excellent multilingual text rendering."
}
]
},

View File

@ -149,7 +149,7 @@
},
"revision": 0,
"config": {},
"name": "local-Text to Image (Z-Image-Turbo)",
"name": "Text to Image (Z-Image-Turbo)",
"inputNode": {
"id": -10,
"bounding": [
@ -1054,7 +1054,8 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Image generation and editing/Text to image"
"category": "Image generation and editing/Text to image",
"description": "Generates images from text prompts using Z-Image-Turbo, Alibaba's distilled 6B DiT model."
}
]
},
@ -1075,4 +1076,4 @@
}
},
"version": 0.4
}
}

View File

@ -4286,7 +4286,8 @@
"extra": {
"workflowRendererVersion": "Vue-corrected"
},
"category": "Video generation and editing/Text to video"
"category": "Video generation and editing/Text to video",
"description": "Generates video from text prompts using LTX-2.3, Lightricks' video diffusion model."
}
]
},

View File

@ -1572,7 +1572,8 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Video generation and editing/Text to video"
"category": "Video generation and editing/Text to video",
"description": "Generates video from text prompts using Wan2.2, Alibaba's diffusion video model."
}
]
},
@ -1586,4 +1587,4 @@
"VHS_KeepIntermediate": true
},
"version": 0.4
}
}

View File

@ -434,8 +434,9 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Image Tools/Sharpen"
"category": "Image Tools/Sharpen",
"description": "Enhances edge contrast via unsharp masking for a sharper image appearance."
}
]
}
}
}

View File

@ -307,7 +307,8 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Text generation/Video Captioning"
"category": "Text generation/Video Captioning",
"description": "Generates descriptive captions for video input using Google's Gemini multimodal LLM."
}
]
}

View File

@ -165,7 +165,7 @@
},
"revision": 0,
"config": {},
"name": "local-Video Inpaint(Wan2.1 VACE)",
"name": "Video Inpaint (Wan 2.1 VACE)",
"inputNode": {
"id": -10,
"bounding": [
@ -2368,7 +2368,8 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Video generation and editing/Inpaint video"
"category": "Video generation and editing/Inpaint video",
"description": "Inpaints masked regions in video frames using Wan 2.1 VACE."
}
]
},

View File

@ -584,8 +584,9 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Video Tools/Stitch videos"
"category": "Video Tools/Stitch videos",
"description": "Stitches multiple video clips into a single sequential video file."
}
]
}
}
}

View File

@ -412,9 +412,10 @@
"extra": {
"workflowRendererVersion": "LG"
},
"category": "Video generation and editing/Enhance video"
"category": "Video generation and editing/Enhance video",
"description": "Upscales video to 4× resolution using a GAN-based upscaling model."
}
]
},
"extra": {}
}
}

View File

@ -1390,7 +1390,7 @@ def convert_old_quants(state_dict, model_prefix="", metadata={}):
k_out = "{}.weight_scale".format(layer)
if layer is not None:
layer_conf = {"format": "float8_e4m3fn"} # TODO: check if anyone did some non e4m3fn scaled checkpoints
layer_conf = {"format": "float8_e4m3fn"}
if full_precision_matrix_mult:
layer_conf["full_precision_matrix_mult"] = full_precision_matrix_mult
layers[layer] = layer_conf

View File

@ -327,11 +327,14 @@ class String(ComfyTypeIO):
'''String input.'''
def __init__(self, id: str, display_name: str=None, optional=False, tooltip: str=None, lazy: bool=None,
multiline=False, placeholder: str=None, default: str=None, dynamic_prompts: bool=None,
socketless: bool=None, force_input: bool=None, extra_dict=None, raw_link: bool=None, advanced: bool=None):
socketless: bool=None, force_input: bool=None, extra_dict=None, raw_link: bool=None, advanced: bool=None,
min_length: int=None, max_length: int=None):
super().__init__(id, display_name, optional, tooltip, lazy, default, socketless, None, force_input, extra_dict, raw_link, advanced)
self.multiline = multiline
self.placeholder = placeholder
self.dynamic_prompts = dynamic_prompts
self.min_length = min_length
self.max_length = max_length
self.default: str
def as_dict(self):
@ -339,6 +342,8 @@ class String(ComfyTypeIO):
"multiline": self.multiline,
"placeholder": self.placeholder,
"dynamicPrompts": self.dynamic_prompts,
"minLength": self.min_length,
"maxLength": self.max_length,
})
@comfytype(io_type="COMBO")
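
A hedged usage sketch of the new keyword arguments on the String input shown above. The import path and the IO.String.Input wrapper name are assumptions inferred from how other inputs appear elsewhere in this PR (IO.Int.Input, IO.Boolean.Input, IO.Combo.Input); the expected as_dict() keys are taken from the diff.

    # Hedged sketch (API names assumed): declaring a String input with length limits.
    from comfy_api.latest import io as IO  # assumed import path

    caption_input = IO.String.Input(
        "caption",
        multiline=True,
        min_length=3,     # surfaced as "minLength" in the node schema
        max_length=256,   # surfaced as "maxLength" in the node schema
    )

    # as_dict() is then expected to include the new keys alongside the existing
    # widget options: {"multiline": True, "placeholder": None,
    #                  "dynamicPrompts": None, "minLength": 3, "maxLength": 256, ...}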

View File

@ -1,10 +1,11 @@
from __future__ import annotations
from enum import Enum
from typing import Optional, List, Dict, Any, Union
from typing import Optional, Any
from pydantic import BaseModel, Field, RootModel
class TripoModelVersion(str, Enum):
v3_1_20260211 = 'v3.1-20260211'
v3_0_20250812 = 'v3.0-20250812'
v2_5_20250123 = 'v2.5-20250123'
v2_0_20240919 = 'v2.0-20240919'
@ -142,7 +143,7 @@ class TripoFileEmptyReference(BaseModel):
pass
class TripoFileReference(RootModel):
root: Union[TripoFileTokenReference, TripoUrlReference, TripoObjectReference, TripoFileEmptyReference]
root: TripoFileTokenReference | TripoUrlReference | TripoObjectReference | TripoFileEmptyReference
class TripoGetStsTokenRequest(BaseModel):
format: str = Field(..., description='The format of the image')
@ -183,7 +184,7 @@ class TripoImageToModelRequest(BaseModel):
class TripoMultiviewToModelRequest(BaseModel):
type: TripoTaskType = TripoTaskType.MULTIVIEW_TO_MODEL
files: List[TripoFileReference] = Field(..., description='The file references to convert to a model')
files: list[TripoFileReference] = Field(..., description='The file references to convert to a model')
model_version: Optional[TripoModelVersion] = Field(None, description='The model version to use for generation')
orthographic_projection: Optional[bool] = Field(False, description='Whether to use orthographic projection')
face_limit: Optional[int] = Field(None, description='The number of faces to limit the generation to')
@ -251,27 +252,13 @@ class TripoConvertModelRequest(BaseModel):
with_animation: Optional[bool] = Field(None, description='Whether to include animations')
pack_uv: Optional[bool] = Field(None, description='Whether to pack the UVs')
bake: Optional[bool] = Field(None, description='Whether to bake the model')
part_names: Optional[List[str]] = Field(None, description='The names of the parts to include')
part_names: Optional[list[str]] = Field(None, description='The names of the parts to include')
fbx_preset: Optional[TripoFbxPreset] = Field(None, description='The preset for the FBX export')
export_vertex_colors: Optional[bool] = Field(None, description='Whether to export the vertex colors')
export_orientation: Optional[TripoOrientation] = Field(None, description='The orientation for the export')
animate_in_place: Optional[bool] = Field(None, description='Whether to animate in place')
class TripoTaskRequest(RootModel):
root: Union[
TripoTextToModelRequest,
TripoImageToModelRequest,
TripoMultiviewToModelRequest,
TripoTextureModelRequest,
TripoRefineModelRequest,
TripoAnimatePrerigcheckRequest,
TripoAnimateRigRequest,
TripoAnimateRetargetRequest,
TripoStylizeModelRequest,
TripoConvertModelRequest
]
class TripoTaskOutput(BaseModel):
model: Optional[str] = Field(None, description='URL to the model')
base_model: Optional[str] = Field(None, description='URL to the base model')
@ -283,12 +270,13 @@ class TripoTask(BaseModel):
task_id: str = Field(..., description='The task ID')
type: Optional[str] = Field(None, description='The type of task')
status: Optional[TripoTaskStatus] = Field(None, description='The status of the task')
input: Optional[Dict[str, Any]] = Field(None, description='The input parameters for the task')
input: Optional[dict[str, Any]] = Field(None, description='The input parameters for the task')
output: Optional[TripoTaskOutput] = Field(None, description='The output of the task')
progress: Optional[int] = Field(None, description='The progress of the task', ge=0, le=100)
create_time: Optional[int] = Field(None, description='The creation time of the task')
running_left_time: Optional[int] = Field(None, description='The estimated time left for the task')
queue_position: Optional[int] = Field(None, description='The position in the queue')
consumed_credit: int | None = Field(None)
class TripoTaskResponse(BaseModel):
code: int = Field(0, description='The response code')
@ -296,7 +284,7 @@ class TripoTaskResponse(BaseModel):
class TripoGeneralResponse(BaseModel):
code: int = Field(0, description='The response code')
data: Dict[str, str] = Field(..., description='The task ID data')
data: dict[str, str] = Field(..., description='The task ID data')
class TripoBalanceData(BaseModel):
balance: float = Field(..., description='The account balance')

View File

@ -60,6 +60,7 @@ async def poll_until_finished(
],
status_extractor=lambda x: x.data.status,
progress_extractor=lambda x: x.data.progress,
price_extractor=lambda x: x.data.consumed_credit * 0.01 if x.data.consumed_credit else None,
estimated_duration=average_duration,
)
if response_poll.data.status == TripoTaskStatus.SUCCESS:
@ -113,7 +114,6 @@ class TripoTextToModelNode(IO.ComfyNode):
depends_on=IO.PriceBadgeDepends(
widgets=[
"model_version",
"style",
"texture",
"pbr",
"quad",
@ -124,20 +124,17 @@ class TripoTextToModelNode(IO.ComfyNode):
expr="""
(
$isV14 := $contains(widgets.model_version,"v1.4");
$style := widgets.style;
$hasStyle := ($style != "" and $style != "none");
$isV3OrLater := $contains(widgets.model_version,"v3.");
$withTexture := widgets.texture or widgets.pbr;
$isHdTexture := (widgets.texture_quality = "detailed");
$isDetailedGeometry := (widgets.geometry_quality = "detailed");
$baseCredits :=
$isV14 ? 20 : ($withTexture ? 20 : 10);
$credits :=
$baseCredits
+ ($hasStyle ? 5 : 0)
$credits := $isV14 ? 20 : (
($withTexture ? 20 : 10)
+ (widgets.quad ? 5 : 0)
+ ($isHdTexture ? 10 : 0)
+ ($isDetailedGeometry ? 20 : 0);
{"type":"usd","usd": $round($credits * 0.01, 2)}
+ (($isDetailedGeometry and $isV3OrLater) ? 20 : 0)
);
{"type":"usd","usd": $round($credits * 0.01, 2), "format": {"approximate": true}}
)
""",
),
@ -239,7 +236,6 @@ class TripoImageToModelNode(IO.ComfyNode):
depends_on=IO.PriceBadgeDepends(
widgets=[
"model_version",
"style",
"texture",
"pbr",
"quad",
@ -250,20 +246,17 @@ class TripoImageToModelNode(IO.ComfyNode):
expr="""
(
$isV14 := $contains(widgets.model_version,"v1.4");
$style := widgets.style;
$hasStyle := ($style != "" and $style != "none");
$isV3OrLater := $contains(widgets.model_version,"v3.");
$withTexture := widgets.texture or widgets.pbr;
$isHdTexture := (widgets.texture_quality = "detailed");
$isDetailedGeometry := (widgets.geometry_quality = "detailed");
$baseCredits :=
$isV14 ? 30 : ($withTexture ? 30 : 20);
$credits :=
$baseCredits
+ ($hasStyle ? 5 : 0)
$credits := $isV14 ? 30 : (
($withTexture ? 30 : 20)
+ (widgets.quad ? 5 : 0)
+ ($isHdTexture ? 10 : 0)
+ ($isDetailedGeometry ? 20 : 0);
{"type":"usd","usd": $round($credits * 0.01, 2)}
+ (($isDetailedGeometry and $isV3OrLater) ? 20 : 0)
);
{"type":"usd","usd": $round($credits * 0.01, 2), "format": {"approximate": true}}
)
""",
),
@ -358,7 +351,7 @@ class TripoMultiviewToModelNode(IO.ComfyNode):
"texture_alignment", default="original_image", options=["original_image", "geometry"], optional=True, advanced=True
),
IO.Int.Input("face_limit", default=-1, min=-1, max=500000, optional=True, advanced=True),
IO.Boolean.Input("quad", default=False, optional=True, advanced=True),
IO.Boolean.Input("quad", default=False, optional=True, advanced=True, tooltip="This parameter is deprecated and does nothing."),
IO.Combo.Input("geometry_quality", default="standard", options=["standard", "detailed"], optional=True, advanced=True),
],
outputs=[
@ -379,7 +372,6 @@ class TripoMultiviewToModelNode(IO.ComfyNode):
"model_version",
"texture",
"pbr",
"quad",
"texture_quality",
"geometry_quality",
],
@ -387,17 +379,16 @@ class TripoMultiviewToModelNode(IO.ComfyNode):
expr="""
(
$isV14 := $contains(widgets.model_version,"v1.4");
$isV3OrLater := $contains(widgets.model_version,"v3.");
$withTexture := widgets.texture or widgets.pbr;
$isHdTexture := (widgets.texture_quality = "detailed");
$isDetailedGeometry := (widgets.geometry_quality = "detailed");
$baseCredits :=
$isV14 ? 30 : ($withTexture ? 30 : 20);
$credits :=
$baseCredits
+ (widgets.quad ? 5 : 0)
$credits := $isV14 ? 30 : (
($withTexture ? 30 : 20)
+ ($isHdTexture ? 10 : 0)
+ ($isDetailedGeometry ? 20 : 0);
{"type":"usd","usd": $round($credits * 0.01, 2)}
+ (($isDetailedGeometry and $isV3OrLater) ? 20 : 0)
);
{"type":"usd","usd": $round($credits * 0.01, 2), "format": {"approximate": true}}
)
""",
),
@ -457,7 +448,7 @@ class TripoMultiviewToModelNode(IO.ComfyNode):
geometry_quality=geometry_quality,
texture_alignment=texture_alignment,
face_limit=face_limit if face_limit != -1 else None,
quad=quad,
quad=None,
),
)
return await poll_until_finished(cls, response, average_duration=80)
@ -498,7 +489,7 @@ class TripoTextureNode(IO.ComfyNode):
expr="""
(
$tq := widgets.texture_quality;
{"type":"usd","usd": ($contains($tq,"detailed") ? 0.2 : 0.1)}
{"type":"usd","usd": ($contains($tq,"detailed") ? 0.2 : 0.1), "format": {"approximate": true}}
)
""",
),
@ -555,7 +546,7 @@ class TripoRefineNode(IO.ComfyNode):
is_api_node=True,
is_output_node=True,
price_badge=IO.PriceBadge(
expr="""{"type":"usd","usd":0.3}""",
expr="""{"type":"usd","usd":0.3, "format": {"approximate": true}}""",
),
)
@ -592,7 +583,7 @@ class TripoRigNode(IO.ComfyNode):
is_api_node=True,
is_output_node=True,
price_badge=IO.PriceBadge(
expr="""{"type":"usd","usd":0.25}""",
expr="""{"type":"usd","usd":0.25, "format": {"approximate": true}}""",
),
)
@ -652,7 +643,7 @@ class TripoRetargetNode(IO.ComfyNode):
is_api_node=True,
is_output_node=True,
price_badge=IO.PriceBadge(
expr="""{"type":"usd","usd":0.1}""",
expr="""{"type":"usd","usd":0.1, "format": {"approximate": true}}""",
),
)
@ -761,19 +752,10 @@ class TripoConversionNode(IO.ComfyNode):
"face_limit",
"texture_size",
"texture_format",
"force_symmetry",
"flatten_bottom",
"flatten_bottom_threshold",
"pivot_to_center_bottom",
"scale_factor",
"with_animation",
"pack_uv",
"bake",
"part_names",
"fbx_preset",
"export_vertex_colors",
"export_orientation",
"animate_in_place",
],
),
expr="""
@ -783,28 +765,16 @@ class TripoConversionNode(IO.ComfyNode):
$flatThresh := (widgets.flatten_bottom_threshold != null) ? widgets.flatten_bottom_threshold : 0;
$scale := (widgets.scale_factor != null) ? widgets.scale_factor : 1;
$texFmt := (widgets.texture_format != "" ? widgets.texture_format : "jpeg");
$part := widgets.part_names;
$fbx := (widgets.fbx_preset != "" ? widgets.fbx_preset : "blender");
$orient := (widgets.export_orientation != "" ? widgets.export_orientation : "default");
$advanced :=
widgets.quad or
widgets.force_symmetry or
widgets.flatten_bottom or
widgets.pivot_to_center_bottom or
widgets.with_animation or
widgets.pack_uv or
widgets.bake or
widgets.export_vertex_colors or
widgets.animate_in_place or
($face != -1) or
($texSize != 4096) or
($flatThresh != 0) or
($scale != 1) or
($texFmt != "jpeg") or
($part != "") or
($fbx != "blender") or
($orient != "default");
{"type":"usd","usd": ($advanced ? 0.1 : 0.05)}
($texFmt != "jpeg");
{"type":"usd","usd": ($advanced ? 0.1 : 0.05), "format": {"approximate": true}}
)
""",
),

View File

@ -488,10 +488,30 @@ async def _diagnose_connectivity() -> dict[str, bool]:
"api_accessible": False,
}
timeout = aiohttp.ClientTimeout(total=5.0)
# Probe Google and Baidu in parallel: Google is blocked by the GFW in mainland China, so a Baidu probe is required
# to correctly detect that Chinese users with working internet do have working internet.
internet_probe_urls = ("https://www.google.com", "https://www.baidu.com")
async with aiohttp.ClientSession(timeout=timeout) as session:
with contextlib.suppress(ClientError, OSError):
async with session.get("https://www.google.com") as resp:
results["internet_accessible"] = resp.status < 500
async def _probe(url: str) -> bool:
try:
async with session.get(url) as resp:
return resp.status < 500
except (ClientError, OSError, asyncio.TimeoutError):
return False
probe_tasks = [asyncio.create_task(_probe(u)) for u in internet_probe_urls]
try:
for fut in asyncio.as_completed(probe_tasks):
if await fut:
results["internet_accessible"] = True
break
finally:
for t in probe_tasks:
if not t.done():
t.cancel()
await asyncio.gather(*probe_tasks, return_exceptions=True)
if not results["internet_accessible"]:
return results

View File

@ -83,7 +83,7 @@ class IsChangedCache:
return self.is_changed[node_id]
# Intentionally do not use cached outputs here. We only want constants in IS_CHANGED
input_data_all, _, v3_data = get_input_data(node["inputs"], class_def, node_id, None)
input_data_all, _, v3_data, _ = get_input_data(node["inputs"], class_def, node_id, None)
try:
is_changed = await _async_map_node_over_list(self.prompt_id, node_id, class_def, input_data_all, is_changed_name, v3_data=v3_data)
is_changed = await resolve_map_node_over_list_results(is_changed)
@ -215,7 +215,35 @@ def get_input_data(inputs, class_def, unique_id, execution_list=None, dynprompt=
if h[x] == "API_KEY_COMFY_ORG":
input_data_all[x] = [extra_data.get("api_key_comfy_org", None)]
v3_data["hidden_inputs"] = hidden_inputs_v3
return input_data_all, missing_keys, v3_data
return input_data_all, missing_keys, v3_data, valid_inputs
def validate_resolved_inputs(input_data_all, class_def, valid_inputs):
"""Validate resolved input values against schema constraints.
This is needed because validate_inputs() only sees direct widget values.
Linked inputs aren't resolved during validate_inputs(), so this runs after resolution to catch any violations.
"""
for x, values in input_data_all.items():
input_type, input_category, extra_info = get_input_info(class_def, x, valid_inputs)
if input_type != "STRING":
continue
min_length = extra_info.get("minLength")
max_length = extra_info.get("maxLength")
if min_length is None and max_length is None:
continue
for val in values:
if val is None or not isinstance(val, str):
continue
if min_length is not None and len(val) < min_length:
raise ValueError(
f"Input '{x}': value length {len(val)} is shorter than "
f"minimum length of {min_length}"
)
if max_length is not None and len(val) > max_length:
raise ValueError(
f"Input '{x}': value length {len(val)} is longer than "
f"maximum length of {max_length}"
)
map_node_over_list = None #Don't hook this please
@ -480,7 +508,7 @@ async def execute(server, dynprompt, caches, current_item, extra_data, executed,
has_subgraph = False
else:
get_progress_state().start_progress(unique_id)
input_data_all, missing_keys, v3_data = get_input_data(inputs, class_def, unique_id, execution_list, dynprompt, extra_data)
input_data_all, missing_keys, v3_data, valid_inputs = get_input_data(inputs, class_def, unique_id, execution_list, dynprompt, extra_data)
if server.client_id is not None:
server.last_node_id = display_node_id
server.send_sync("executing", { "node": unique_id, "display_node": display_node_id, "prompt_id": prompt_id }, server.client_id)
@ -509,6 +537,8 @@ async def execute(server, dynprompt, caches, current_item, extra_data, executed,
execution_list.make_input_strong_link(unique_id, i)
return (ExecutionResult.PENDING, None, None)
validate_resolved_inputs(input_data_all, class_def, valid_inputs)
def execution_block_cb(block):
if block.message is not None:
mes = {
@ -1014,6 +1044,34 @@ async def validate_inputs(prompt_id, prompt, item, validated, visiting=None):
errors.append(error)
continue
if input_type == "STRING":
if "minLength" in extra_info and len(val) < extra_info["minLength"]:
error = {
"type": "value_shorter_than_min_length",
"message": "Value length {} shorter than min length of {}".format(len(val), extra_info["minLength"]),
"details": f"{x}",
"extra_info": {
"input_name": x,
"input_config": info,
"received_value": val,
}
}
errors.append(error)
continue
if "maxLength" in extra_info and len(val) > extra_info["maxLength"]:
error = {
"type": "value_longer_than_max_length",
"message": "Value length {} longer than max length of {}".format(len(val), extra_info["maxLength"]),
"details": f"{x}",
"extra_info": {
"input_name": x,
"input_config": info,
"received_value": val,
}
}
errors.append(error)
continue
if isinstance(input_type, list) or input_type == io.Combo.io_type:
if input_type == io.Combo.io_type:
combo_options = extra_info.get("options", [])
@ -1050,7 +1108,7 @@ async def validate_inputs(prompt_id, prompt, item, validated, visiting=None):
continue
if len(validate_function_inputs) > 0 or validate_has_kwargs:
input_data_all, _, v3_data = get_input_data(inputs, obj_class, unique_id)
input_data_all, _, v3_data, _ = get_input_data(inputs, obj_class, unique_id)
input_filtered = {}
for x in input_data_all:
if x in validate_function_inputs or validate_has_kwargs:

View File

@ -74,6 +74,8 @@ tags:
description: Cloud workflow management and versioning (cloud-only)
- name: task
description: Background task management (cloud-only)
- name: runtime-only
description: Operations served exclusively by the cloud runtime with no local equivalent
paths:
# ---------------------------------------------------------------------------
@ -2573,35 +2575,38 @@ paths:
# ---------------------------------------------------------------------------
/api/experiment/nodes:
get:
operationId: listCloudNodes
tags: [node]
summary: List installed custom nodes
description: "[cloud-only] Returns the list of custom node packages installed in the cloud runtime."
operationId: getNodeInfoSchema
tags: [runtime-only]
summary: Get pre-rendered node info schema
description: "[cloud-only] Returns the static ComfyUI object_info schema, identical for every caller, rendered once at startup with empty model/user-file context. Served by a raw HTTP handler that writes pre-rendered bytes with ETag + Cache-Control validators for RFC 7232 conditional GETs."
x-runtime: [cloud]
parameters:
- name: limit
in: query
- name: If-None-Match
in: header
required: false
schema:
type: integer
description: Maximum number of results
- name: offset
in: query
schema:
type: integer
description: Pagination offset
type: string
description: Entity tag previously returned by this endpoint. When present and matching, the server returns 304 Not Modified.
responses:
"200":
description: Custom node list
description: Node info schema
headers:
ETag:
schema:
type: string
description: Entity tag for conditional request validation
Cache-Control:
schema:
type: string
description: Cache directives for the response
content:
application/json:
schema:
$ref: "#/components/schemas/CloudNodeList"
"401":
description: Unauthorized
content:
application/json:
schema:
$ref: "#/components/schemas/CloudError"
type: object
additionalProperties:
$ref: "#/components/schemas/NodeInfo"
"304":
description: Not Modified — returned when the client sends a matching If-None-Match header
post:
operationId: installCloudNode
tags: [node]
@ -2651,10 +2656,10 @@ paths:
/api/experiment/nodes/{id}:
get:
operationId: getCloudNode
tags: [node]
summary: Get details of an installed custom node
description: "[cloud-only] Returns details about a specific installed custom node package."
operationId: getNodeByID
tags: [runtime-only]
summary: Get a single node definition by ID
description: "[cloud-only] Returns one node's definition from the pre-indexed object_info schema. Served by a raw HTTP handler that writes pre-rendered bytes with ETag + Cache-Control validators for RFC 7232 conditional GETs."
x-runtime: [cloud]
parameters:
- name: id
@ -2662,26 +2667,33 @@ paths:
required: true
schema:
type: string
description: Custom node package ID
description: Node class identifier
- name: If-None-Match
in: header
required: false
schema:
type: string
description: Entity tag previously returned by this endpoint. When present and matching, the server returns 304 Not Modified.
responses:
"200":
description: Node detail
description: Single node definition
headers:
ETag:
schema:
type: string
description: Entity tag for conditional request validation
Cache-Control:
schema:
type: string
description: Cache directives for the response
content:
application/json:
schema:
$ref: "#/components/schemas/CloudNode"
"401":
description: Unauthorized
content:
application/json:
schema:
$ref: "#/components/schemas/CloudError"
$ref: "#/components/schemas/NodeInfo"
"304":
description: Not Modified — returned when the client sends a matching If-None-Match header
"404":
description: Not found
content:
application/json:
schema:
$ref: "#/components/schemas/CloudError"
description: Node not found
delete:
operationId: uninstallCloudNode
tags: [node]
@ -7100,22 +7112,6 @@ components:
enabled:
type: boolean
CloudNodeList:
type: object
x-runtime: [cloud]
description: "[cloud-only] Paginated list of installed custom node packages."
required:
- nodes
properties:
nodes:
type: array
items:
$ref: "#/components/schemas/CloudNode"
total:
type: integer
has_more:
type: boolean
HubLabel:
type: object
x-runtime: [cloud]

View File

@ -1,4 +1,4 @@
comfyui-frontend-package==1.43.17
comfyui-frontend-package==1.43.18
comfyui-workflow-templates==0.9.72
comfyui-embedded-docs==0.4.4
torch

View File

@ -1011,3 +1011,49 @@ class TestExecution:
"""Test getting a non-existent job returns 404"""
job = client.get_job("nonexistent-job-id")
assert job is None, "Non-existent job should return None"
@pytest.mark.parametrize("text, expect_error", [
("hello", False), # 5 chars, within [3, 10]
("abc", False), # 3 chars, exact min boundary
("abcdefghij", False), # 10 chars, exact max boundary
("ab", True), # 2 chars, below min
("abcdefghijk", True), # 11 chars, above max
("", True), # 0 chars, below min
])
def test_string_length_widget_validation(self, text, expect_error, client: ComfyClient, builder: GraphBuilder):
"""Test minLength/maxLength validation for direct widget values (validate_inputs path)."""
g = builder
node = g.node("StubStringWithLength", text=text)
g.node("SaveImage", images=node.out(0))
if expect_error:
with pytest.raises(urllib.error.HTTPError) as exc_info:
client.run(g)
assert exc_info.value.code == 400
else:
client.run(g)
@pytest.mark.parametrize("text, expect_error", [
("hello", False), # 5 chars, within [3, 10]
("abc", False), # 3 chars, exact min boundary
("abcdefghij", False), # 10 chars, exact max boundary
("ab", True), # 2 chars, below min
("abcdefghijk", True), # 11 chars, above max
("", True), # 0 chars, below min
])
def test_string_length_linked_validation(self, text, expect_error, client: ComfyClient, builder: GraphBuilder):
"""Test minLength/maxLength validation for linked inputs (validate_resolved_inputs path)."""
g = builder
str_node = g.node("StubStringOutput", value=text)
node = g.node("StubStringWithLength", text=str_node.out(0))
g.node("SaveImage", images=node.out(0))
if expect_error:
try:
client.run(g)
assert False, "Should have raised an error"
except Exception as e:
assert 'prompt_id' in e.args[0], f"Did not get proper error message: {e}"
else:
client.run(g)

View File

@ -113,12 +113,48 @@ class StubFloat:
def stub_float(self, value):
return (value,)
class StubStringOutput:
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"value": ("STRING", {"default": ""}),
},
}
RETURN_TYPES = ("STRING",)
FUNCTION = "stub_string"
CATEGORY = "Testing/Stub Nodes"
def stub_string(self, value):
return (value,)
class StubStringWithLength:
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"text": ("STRING", {"default": "hello", "minLength": 3, "maxLength": 10}),
},
}
RETURN_TYPES = ("IMAGE",)
FUNCTION = "stub_string_with_length"
CATEGORY = "Testing/Stub Nodes"
def stub_string_with_length(self, text):
return (torch.zeros(1, 64, 64, 3),)
TEST_STUB_NODE_CLASS_MAPPINGS = {
"StubImage": StubImage,
"StubConstantImage": StubConstantImage,
"StubMask": StubMask,
"StubInt": StubInt,
"StubFloat": StubFloat,
"StubStringOutput": StubStringOutput,
"StubStringWithLength": StubStringWithLength,
}
TEST_STUB_NODE_DISPLAY_NAME_MAPPINGS = {
"StubImage": "Stub Image",
@ -126,4 +162,6 @@ TEST_STUB_NODE_DISPLAY_NAME_MAPPINGS = {
"StubMask": "Stub Mask",
"StubInt": "Stub Int",
"StubFloat": "Stub Float",
"StubStringOutput": "Stub String Output",
"StubStringWithLength": "Stub String With Length",
}