Compare commits


15 Commits

Author SHA1 Message Date
Daxiong (Lin)
3e9654a0e6
Merge branch 'master' into blueprints/subgraph-description 2026-05-09 11:18:15 +08:00
linmoumou
22fcf61647 Apply CodeRabbit review suggestions
- Color Adjustment: include vibrance in description
- Image Blur: expand to Gaussian/Box/Radial modes
- Flux.2 Klein 4B: narrow to image edit only (no T2I)
- NetaYume Lumina: correct model base (Neta Lumina, not Lumina-Next)
2026-05-09 11:16:06 +08:00
linmoumou
01cda06973 fix: revert Color Adjustment.json to preserve original GLSL shader content
Only adds the 'description' field without modifying the shader code
(which contained Unicode escape \\u2192 that should be preserved).
2026-05-09 10:45:54 +08:00
linmoumou
563ac16a35 Apply review suggestions from alexisrolland
- Rename 'Image to Model (Hunyuan3d 2.1)' -> 'Image to 3D Model (Hunyuan3d 2.1)'
- Rename 'Image Upscale(Z-image-Turbo)' -> 'Image Upscale (Z-image-Turbo)'
- Rename 'Video Inpaint(Wan2.1 VACE)' -> 'Video Inpaint (Wan 2.1 VACE)'
- Use 'Black Forest Labs' branding in Flux descriptions
- Use 'Google's Gemini' with possessive in captioning nodes
- Normalize 'Wan 2.2' and 'Wan 2.1' spacing in descriptions
2026-05-09 10:40:42 +08:00
linmoumou
0e5b8c8534 Preserve UTF-8 encoding in JSON files (ensure_ascii=False) 2026-05-09 10:40:42 +08:00
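The `ensure_ascii=False` behavior this commit relies on can be seen with a minimal Python snippet (illustrative, not the repository's code):

```python
import json

data = {"description": "Scale inputs: -100/100 → -1/1"}

# By default json.dumps escapes non-ASCII characters, so the arrow
# is written as the \u2192 escape sequence.
escaped = json.dumps(data)
print("\\u2192" in escaped)

# ensure_ascii=False writes the arrow as literal UTF-8 instead.
utf8 = json.dumps(data, ensure_ascii=False)
print("→" in utf8)
```

Both forms parse back to identical data; only the on-disk byte representation differs.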
linmoumou
092a58a77a Remove 'local-' prefix from subgraph names 2026-05-09 10:40:42 +08:00
linmoumou
0e4d355015 Fix Canny to Video (LTX 2.0) description 2026-05-09 10:40:42 +08:00
linmoumou
f6e18c9d73 Strip marketing fluff and license info from descriptions 2026-05-09 10:40:42 +08:00
linmoumou
567104c042 Refine blueprint descriptions with researched model specs from docs
Updates subgraph descriptions across all 51 blueprints with accurate
model details drawn from ComfyUI docs, including:
- Flux.1 Dev: 12B open-weights, Pro-level quality
- Flux.2 Klein 4B: fastest Flux, distilled architecture
- Qwen-Image: 20B MMDiT, multilingual text rendering
- Z-Image-Turbo: distilled 6B DiT, sub-second inference
- LTX-2/2.3: 19B DiT audio-video foundation model
- Wan2.2: open-source, 14B/1.3B variants
- ACE-Step 1.5: ~1s full-song generation
- GPU shader nodes consistently labeled as fragment shaders
2026-05-09 10:40:42 +08:00
linmoumou
35fccccd08 Update blueprint descriptions with researched model info 2026-05-09 10:40:42 +08:00
linmoumou
8bf520295e Add description field to all blueprint subgraphs
Sets the 'description' field on every subgraph blueprint node,
which will show on the node preview and tooltip. Covers all 51
blueprint files under blueprints/.
2026-05-09 10:40:42 +08:00
Matt Miller
4e823431cc
Add cloud-runtime experiment node-schema endpoints to spec (#13806)
* Add cloud-runtime experiment node-schema endpoints to spec

Replace the GET operations at /api/experiment/nodes and
/api/experiment/nodes/{id} with getNodeInfoSchema and getNodeByID —
the optimized, ETag-tagged object_info schema endpoints the cloud
frontend depends on for the workflow editor.

Each operation is tagged x-runtime: [cloud] and uses the runtime-only
tag for cloud-side codegen exclusion. Response headers document the
ETag and Cache-Control validators; 304 Not Modified is declared for
RFC 7232 conditional GETs.

Remove the now-unused CloudNodeList schema to keep Spectral clean.

Co-authored-by: Matt Miller <MillerMedia@users.noreply.github.com>

* spec: document If-None-Match header on conditional GET endpoints

Both `getNodeInfoSchema` and `getNodeByID` advertise `ETag` response
headers and a `304 Not Modified` response, but the spec didn't declare
the `If-None-Match` request header that triggers conditional validation.
Adding it as an optional header parameter on both ops so client codegen
exposes the conditional-GET pattern.
2026-05-08 19:14:23 -07:00
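The conditional-GET flow these spec changes document can be sketched from the client side (a hedged illustration of the RFC 7232 ETag / If-None-Match pattern; the class and its names are invented, not the ComfyUI frontend code):

```python
class ETagCache:
    """Client-side cache for the ETag / If-None-Match validation pattern."""

    def __init__(self):
        self._etag = None
        self._body = None

    def request_headers(self):
        # Send the stored validator so the server can answer 304 Not Modified.
        return {"If-None-Match": self._etag} if self._etag else {}

    def handle_response(self, status, etag=None, body=None):
        if status == 304:
            return self._body  # validator matched: reuse the cached schema
        self._etag, self._body = etag, body  # 200: refresh cache and validator
        return body


cache = ETagCache()
# First fetch: no validator yet, server returns 200 with an ETag.
cache.handle_response(200, etag='"abc"', body={"nodes": []})
# Second fetch: the client sends If-None-Match; the server may reply 304.
print(cache.request_headers())
print(cache.handle_response(304))
```

Declaring `If-None-Match` in the spec is what lets generated clients expose this pattern instead of refetching the full schema on every request.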
comfyanonymous
66669b2ded
I don't think there was any because nobody complained. (#13807)
2026-05-08 17:32:14 -07:00
Alexander Piskun
65045730a6
[Partner Nodes] additionally use Baidu server to detect the accessibility of internet (#13803)
Signed-off-by: bigcat88 <bigcat88@icloud.com>
2026-05-08 13:11:52 -07:00
Matt Miller
87878f354f
Add cloud-runtime FE-facing operations to spec (#13734)
* Add cloud-runtime FE-facing operations to openapi.yaml

Add ~67 cloud-runtime FE-facing path operations to the core OpenAPI spec,
each tagged with x-runtime: [cloud] at the operation level. These operations
are served by the cloud runtime; the local runtime returns 404 for all of
these paths.

Domain groups added:
- Jobs / prompts: /api/job/*, /api/jobs/*/cancel, /api/prompt/*, etc.
- History v2: /api/history_v2, /api/history_v2/{prompt_id}
- Cloud logs: /api/logs
- Asset extensions: /api/assets/download, export, import, etc.
- Custom nodes: /api/experiment/nodes (cloud install/uninstall)
- Hub: /api/hub/profiles, /api/hub/workflows, /api/hub/labels, etc.
- Workflows: /api/workflows CRUD, versioning, fork, publish
- Auth/session: /api/auth/session, /api/auth/token, /.well-known/jwks.json
- Billing: /api/billing/balance, plans, subscribe, topup, etc.
- Workspace: /api/workspace/*, /api/workspaces/*
- User/settings/misc: /api/user, /api/secrets, /api/feedback, etc.

Also adds corresponding cloud-only component schemas (CloudJob, CloudWorkflow,
BillingPlan, Workspace, HubProfile, AuthSession, etc.), all tagged with
x-runtime: [cloud].

Spectral lint passes under the existing ruleset with zero new warnings.

* Add job_id field to Asset schema and deprecate prompt_id (#13736)

- Add job_id as a nullable UUID field to the Asset schema
- Mark prompt_id as deprecated with note pointing to job_id
- No x-runtime tag needed as both runtimes populate the field

* Add hash field to Asset schemas and deprecate asset_hash (#13738)

- Add 'hash' as a nullable string field to Asset and AssetUpdated schemas
- Mark 'asset_hash' as deprecated with a note pointing to 'hash'
- AssetCreated inherits 'hash' via allOf from Asset
- Spectral lint clean (no new warnings)
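The deprecation pattern described in the two commits above might look like this in the spec (a hypothetical fragment: only the field names come from the commit messages, the surrounding structure is assumed):

```yaml
Asset:
  type: object
  properties:
    job_id:
      type: string
      format: uuid
      nullable: true
    prompt_id:
      type: string
      nullable: true
      deprecated: true
      description: Deprecated; use `job_id` instead.
    hash:
      type: string
      nullable: true
    asset_hash:
      type: string
      nullable: true
      deprecated: true
      description: Deprecated; use `hash` instead.
```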

* Fix method drift on cloud-runtime endpoints

Three PUT operations were added that should be PATCH (cloud serves
PATCH for partial updates):

- /api/workflows/{workflow_id}
- /api/workspaces/{id}
- /api/workspace/members/{userId}

Two POST operations were added that should be GET (cloud serves GET
with query params):

- /api/assets/remote-metadata (url moves to query param)
- /api/files/mask-layers (response shape replaced — operation queries
  related mask layer filenames, not file uploads)
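The PUT-versus-PATCH distinction the fix above restores can be illustrated with RFC 7386 (JSON Merge Patch) semantics, a common way partial updates behave (a sketch, not the cloud runtime's actual logic):

```python
def apply_merge_patch(resource, patch):
    """Apply an RFC 7386-style merge patch: only keys present in the
    patch change, and a null (None) value removes the key. A PUT, by
    contrast, replaces the entire resource representation."""
    out = dict(resource)
    for key, value in patch.items():
        if value is None:
            out.pop(key, None)
        elif isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = apply_merge_patch(out[key], value)  # recurse into objects
        else:
            out[key] = value
    return out


workflow = {"name": "upscale", "description": "old", "tags": ["img"]}
print(apply_merge_patch(workflow, {"description": "new", "tags": None}))
```

A client issuing PUT with only `{"description": "new"}` would implicitly drop every omitted field, which is why serving PATCH for partial updates matters.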

* Add missing cloud-runtime operations and schemas

PR review surfaced operations the cloud runtime serves that weren't
covered by the initial spec push, plus one path family missed entirely.

New methods on existing paths:

- /api/auth/session: add POST (create session cookie) and DELETE (logout)
- /api/secrets/{id}: add GET (read metadata) and PATCH (update)
- /api/hub/profiles: add POST (create profile)
- /api/hub/workflows: add POST (publish to hub)
- /api/hub/workflows/{share_id}: add DELETE (unpublish)
- /api/workspaces/{id}: add DELETE (soft-delete workspace)
- /api/workspace/members/{user_id}/api-keys: add DELETE (bulk revoke)
- /api/workflows/{workflow_id}/versions: add POST (create new version)
- /api/userdata/{file}/publish: add GET (read publish info)

New path family:

- /api/tasks (GET list) and /api/tasks/{task_id} (GET detail) for the
  background task framework

New component schemas (all tagged x-runtime: [cloud]):

CreateSessionResponse, DeleteSessionResponse, UpdateSecretRequest,
BulkRevokeAPIKeysResponse, CreateHubProfileRequest, PublishHubWorkflowRequest,
HubWorkflowDetail, AssetInfo, CreateWorkflowVersionRequest,
WorkflowVersionResponse, WorkflowPublishInfo, TaskEntry, TaskResponse,
TasksListResponse. Existing SecretMeta extended with provider and
last_used_at fields the cloud runtime actually returns.

New tag: task. Spectral lint passes with zero errors.

* Add job_id and prompt_id to AssetUpdated schema

Mirrors the Asset schema's deprecation pattern: prompt_id is marked
deprecated with a description pointing to job_id; job_id is the new
preferred field. PUT /api/assets/{id} responses can now carry both fields
consistent with the other Asset-returning endpoints.

* feat: add width and height fields to Asset schema (#13745)

Add nullable integer fields 'width' and 'height' to the Asset schema
in openapi.yaml. These expose original image dimensions in pixels for
clients that need pre-thumbnail size info. Both fields are null for
non-image assets or assets ingested before dimension extraction.

Co-authored-by: Matt Miller <MillerMedia@users.noreply.github.com>

* Remove /api/job/{job_id} and /api/job/{job_id}/outputs

These two paths are not actually served by the cloud runtime — they
return 404 with a redirect message pointing callers to the canonical
`/api/jobs/{job_id}` (plural). Declaring them with `x-runtime: [cloud]`
and a 200 response schema is incorrect.

`/api/job/{job_id}/status` stays — it is a real cloud-served endpoint.

Also drops the now-orphaned `CloudJob` and `CloudJobOutputs` component
schemas. `CloudJobStatus` is retained.
2026-05-08 12:39:16 -07:00
16 changed files with 4762 additions and 34 deletions

View File

@ -256,7 +256,7 @@
"Node name for S&R": "GLSLShader"
},
"widgets_values": [
"#version 300 es\nprecision highp float;\n\nuniform sampler2D u_image0;\nuniform float u_float0; // temperature (-100 to 100)\nuniform float u_float1; // tint (-100 to 100)\nuniform float u_float2; // vibrance (-100 to 100)\nuniform float u_float3; // saturation (-100 to 100)\n\nin vec2 v_texCoord;\nout vec4 fragColor;\n\nconst float INPUT_SCALE = 0.01;\nconst float TEMP_TINT_PRIMARY = 0.3;\nconst float TEMP_TINT_SECONDARY = 0.15;\nconst float VIBRANCE_BOOST = 2.0;\nconst float SATURATION_BOOST = 2.0;\nconst float SKIN_PROTECTION = 0.5;\nconst float EPSILON = 0.001;\nconst vec3 LUMA_WEIGHTS = vec3(0.299, 0.587, 0.114);\n\nvoid main() {\n vec4 tex = texture(u_image0, v_texCoord);\n vec3 color = tex.rgb;\n \n // Scale inputs: -100/100 -1/1\n float temperature = u_float0 * INPUT_SCALE;\n float tint = u_float1 * INPUT_SCALE;\n float vibrance = u_float2 * INPUT_SCALE;\n float saturation = u_float3 * INPUT_SCALE;\n \n // Temperature (warm/cool): positive = warm, negative = cool\n color.r += temperature * TEMP_TINT_PRIMARY;\n color.b -= temperature * TEMP_TINT_PRIMARY;\n \n // Tint (green/magenta): positive = green, negative = magenta\n color.g += tint * TEMP_TINT_PRIMARY;\n color.r -= tint * TEMP_TINT_SECONDARY;\n color.b -= tint * TEMP_TINT_SECONDARY;\n \n // Single clamp after temperature/tint\n color = clamp(color, 0.0, 1.0);\n \n // Vibrance with skin protection\n if (vibrance != 0.0) {\n float maxC = max(color.r, max(color.g, color.b));\n float minC = min(color.r, min(color.g, color.b));\n float sat = maxC - minC;\n float gray = dot(color, LUMA_WEIGHTS);\n \n if (vibrance < 0.0) {\n // Desaturate: -100 gray\n color = mix(vec3(gray), color, 1.0 + vibrance);\n } else {\n // Boost less saturated colors more\n float vibranceAmt = vibrance * (1.0 - sat);\n \n // Branchless skin tone protection\n float isWarmTone = step(color.b, color.g) * step(color.g, color.r);\n float warmth = (color.r - color.b) / max(maxC, EPSILON);\n float skinTone = isWarmTone * warmth * sat * (1.0 
- sat);\n vibranceAmt *= (1.0 - skinTone * SKIN_PROTECTION);\n \n color = mix(vec3(gray), color, 1.0 + vibranceAmt * VIBRANCE_BOOST);\n }\n }\n \n // Saturation\n if (saturation != 0.0) {\n float gray = dot(color, LUMA_WEIGHTS);\n float satMix = saturation < 0.0\n ? 1.0 + saturation // -100 → gray\n : 1.0 + saturation * SATURATION_BOOST; // +100 → 3x boost\n color = mix(vec3(gray), color, satMix);\n }\n \n fragColor = vec4(clamp(color, 0.0, 1.0), tex.a);\n}",
"#version 300 es\nprecision highp float;\n\nuniform sampler2D u_image0;\nuniform float u_float0; // temperature (-100 to 100)\nuniform float u_float1; // tint (-100 to 100)\nuniform float u_float2; // vibrance (-100 to 100)\nuniform float u_float3; // saturation (-100 to 100)\n\nin vec2 v_texCoord;\nout vec4 fragColor;\n\nconst float INPUT_SCALE = 0.01;\nconst float TEMP_TINT_PRIMARY = 0.3;\nconst float TEMP_TINT_SECONDARY = 0.15;\nconst float VIBRANCE_BOOST = 2.0;\nconst float SATURATION_BOOST = 2.0;\nconst float SKIN_PROTECTION = 0.5;\nconst float EPSILON = 0.001;\nconst vec3 LUMA_WEIGHTS = vec3(0.299, 0.587, 0.114);\n\nvoid main() {\n vec4 tex = texture(u_image0, v_texCoord);\n vec3 color = tex.rgb;\n \n // Scale inputs: -100/100 \u2192 -1/1\n float temperature = u_float0 * INPUT_SCALE;\n float tint = u_float1 * INPUT_SCALE;\n float vibrance = u_float2 * INPUT_SCALE;\n float saturation = u_float3 * INPUT_SCALE;\n \n // Temperature (warm/cool): positive = warm, negative = cool\n color.r += temperature * TEMP_TINT_PRIMARY;\n color.b -= temperature * TEMP_TINT_PRIMARY;\n \n // Tint (green/magenta): positive = green, negative = magenta\n color.g += tint * TEMP_TINT_PRIMARY;\n color.r -= tint * TEMP_TINT_SECONDARY;\n color.b -= tint * TEMP_TINT_SECONDARY;\n \n // Single clamp after temperature/tint\n color = clamp(color, 0.0, 1.0);\n \n // Vibrance with skin protection\n if (vibrance != 0.0) {\n float maxC = max(color.r, max(color.g, color.b));\n float minC = min(color.r, min(color.g, color.b));\n float sat = maxC - minC;\n float gray = dot(color, LUMA_WEIGHTS);\n \n if (vibrance < 0.0) {\n // Desaturate: -100 \u2192 gray\n color = mix(vec3(gray), color, 1.0 + vibrance);\n } else {\n // Boost less saturated colors more\n float vibranceAmt = vibrance * (1.0 - sat);\n \n // Branchless skin tone protection\n float isWarmTone = step(color.b, color.g) * step(color.g, color.r);\n float warmth = (color.r - color.b) / max(maxC, EPSILON);\n float skinTone = isWarmTone * 
warmth * sat * (1.0 - sat);\n vibranceAmt *= (1.0 - skinTone * SKIN_PROTECTION);\n \n color = mix(vec3(gray), color, 1.0 + vibranceAmt * VIBRANCE_BOOST);\n }\n }\n \n // Saturation\n if (saturation != 0.0) {\n float gray = dot(color, LUMA_WEIGHTS);\n float satMix = saturation < 0.0\n ? 1.0 + saturation // -100 \u2192 gray\n : 1.0 + saturation * SATURATION_BOOST; // +100 \u2192 3x boost\n color = mix(vec3(gray), color, satMix);\n }\n \n fragColor = vec4(clamp(color, 0.0, 1.0), tex.a);\n}",
"from_input"
]
},
@ -597,8 +597,8 @@
"workflowRendererVersion": "LG"
},
"category": "Image Tools/Color adjust",
- "description": "Adjusts saturation, temperature, and tint using a real-time GPU fragment shader."
+ "description": "Adjusts saturation, temperature, tint, and vibrance using a real-time GPU fragment shader."
}
]
}
}
}
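The vibrance branch of the shader above can be ported to Python to see its behavior (a simplified sketch that omits the skin-tone-protection term; constants mirror the GLSL):

```python
LUMA_WEIGHTS = (0.299, 0.587, 0.114)
VIBRANCE_BOOST = 2.0

def _mix(a, b, t):
    # GLSL mix(): componentwise linear interpolation between a and b.
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def vibrance(rgb, amount):
    """amount in -1..1 (the shader input after INPUT_SCALE). Negative
    values fade toward gray; positive values boost desaturated colors
    more than already-saturated ones."""
    sat = max(rgb) - min(rgb)
    gray = sum(c * w for c, w in zip(rgb, LUMA_WEIGHTS))
    if amount < 0.0:
        return _mix((gray,) * 3, rgb, 1.0 + amount)
    amt = amount * (1.0 - sat)  # less saturated -> bigger boost
    return _mix((gray,) * 3, rgb, 1.0 + amt * VIBRANCE_BOOST)

print(vibrance((0.2, 0.4, 0.6), -1.0))  # amount -1: fully desaturated
print(vibrance((1.0, 0.0, 0.0), 0.8))   # sat == 1: essentially unchanged
```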

View File

@ -375,8 +375,8 @@
"workflowRendererVersion": "LG"
},
"category": "Image Tools/Blur",
- "description": "Applies Gaussian blur to soften an image or simulate depth-of-field effects."
+ "description": "Applies Gaussian, Box, or Radial blur to soften images and create stylized depth or motion effects."
}
]
}
}
}

View File

@ -311,8 +311,8 @@
"workflowRendererVersion": "LG"
},
"category": "Text generation/Image Captioning",
- "description": "Generates descriptive captions for images using Google Gemini's multimodal LLM."
+ "description": "Generates descriptive captions for images using Google's Gemini multimodal LLM."
}
]
}
}
}

View File

@ -1473,7 +1473,7 @@
"workflowRendererVersion": "LG"
},
"category": "Image generation and editing/Edit image",
- "description": "Edits images via text instructions using FLUX.2 [klein] 4B, supporting both T2I and image editing."
+ "description": "Edits an input image via text instructions using FLUX.2 [klein] 4B."
},
{
"id": "6007e698-2ebd-4917-84d8-299b35d7b7ab",
@ -1839,4 +1839,4 @@
}
},
"version": 0.4
}
}

View File

@ -1189,7 +1189,7 @@
"workflowRendererVersion": "LG"
},
"category": "Image generation and editing/Inpaint image",
- "description": "Inpaints masked image regions using Flux.1 fill [dev], BFL's inpainting/outpainting model."
+ "description": "Inpaints masked image regions using Flux.1 fill [dev], Black Forest Labs' inpainting/outpainting model."
}
]
},
@ -1203,4 +1203,4 @@
},
"ue_links": []
}
}
}

View File

@ -141,7 +141,7 @@
},
"revision": 0,
"config": {},
- "name": "Image Upscale(Z-image-Turbo)",
+ "name": "Image Upscale (Z-image-Turbo)",
"inputNode": {
"id": -10,
"bounding": [
@ -1312,4 +1312,4 @@
"workflowRendererVersion": "LG"
},
"version": 0.4
}
}

View File

@ -72,7 +72,7 @@
},
"revision": 0,
"config": {},
- "name": "Image to Model (Hunyuan3d 2.1)",
+ "name": "Image to 3D Model (Hunyuan3d 2.1)",
"inputNode": {
"id": -10,
"bounding": [
@ -782,4 +782,4 @@
"workflowRendererVersion": "LG"
},
"version": 0.4
}
}

View File

@ -2028,7 +2028,7 @@
"workflowRendererVersion": "LG"
},
"category": "Video generation and editing/Image to video",
- "description": "Generates video from an image and text prompt using Wan2.2, supporting T2V and I2V."
+ "description": "Generates video from an image and text prompt using Wan 2.2, supporting T2V and I2V."
}
]
},
@ -2050,4 +2050,4 @@
"ue_links": []
},
"version": 0.4
}
}

View File

@ -1030,7 +1030,7 @@
"workflowRendererVersion": "LG"
},
"category": "Image generation and editing/Text to image",
- "description": "Generates images from text prompts using Flux.1 [dev], BFL's 12B diffusion model."
+ "description": "Generates images from text prompts using Flux.1 [dev], Black Forest Labs' 12B diffusion model."
}
]
},
@ -1044,4 +1044,4 @@
},
"ue_links": []
}
}
}

View File

@ -1024,7 +1024,7 @@
"workflowRendererVersion": "LG"
},
"category": "Image generation and editing/Text to image",
- "description": "Generates images from text prompts using Flux.1 Krea Dev, a BFL × Krea collaboration variant."
+ "description": "Generates images from text prompts using Flux.1 Krea Dev, a Black Forest Labs × Krea collaboration variant."
}
]
},
@ -1038,4 +1038,4 @@
},
"ue_links": []
}
}
}

View File

@ -1105,7 +1105,7 @@
"workflowRendererVersion": "LG"
},
"category": "Image generation and editing/Text to image",
- "description": "Generates images from text prompts using NetaYume Lumina, a Lumina-Next variant fine-tuned for anime-style and illustration generation."
+ "description": "Generates images from text prompts using NetaYume Lumina, fine-tuned from Neta Lumina for anime-style and illustration generation."
},
{
"id": "a07fdf06-1bda-4dac-bdbd-63ee8ebca1c9",
@ -1467,4 +1467,4 @@
"extra": {
"ue_links": []
}
}
}

View File

@ -308,8 +308,8 @@
"workflowRendererVersion": "LG"
},
"category": "Text generation/Video Captioning",
- "description": "Generates descriptive captions for video input using Google Gemini's multimodal LLM."
+ "description": "Generates descriptive captions for video input using Google's Gemini multimodal LLM."
}
]
}
}
}

View File

@ -165,7 +165,7 @@
},
"revision": 0,
"config": {},
- "name": "Video Inpaint(Wan2.1 VACE)",
+ "name": "Video Inpaint (Wan 2.1 VACE)",
"inputNode": {
"id": -10,
"bounding": [
@ -2369,7 +2369,7 @@
"workflowRendererVersion": "LG"
},
"category": "Video generation and editing/Inpaint video",
- "description": "Inpaints masked regions in video frames using Wan2.1 VACE."
+ "description": "Inpaints masked regions in video frames using Wan 2.1 VACE."
}
]
},
@ -2385,4 +2385,4 @@
}
},
"version": 0.4
}
}

View File

@ -1390,7 +1390,7 @@ def convert_old_quants(state_dict, model_prefix="", metadata={}):
    k_out = "{}.weight_scale".format(layer)
    if layer is not None:
-       layer_conf = {"format": "float8_e4m3fn"} # TODO: check if anyone did some non e4m3fn scaled checkpoints
+       layer_conf = {"format": "float8_e4m3fn"}
        if full_precision_matrix_mult:
            layer_conf["full_precision_matrix_mult"] = full_precision_matrix_mult
        layers[layer] = layer_conf

View File

@ -488,10 +488,30 @@ async def _diagnose_connectivity() -> dict[str, bool]:
        "api_accessible": False,
    }
    timeout = aiohttp.ClientTimeout(total=5.0)
+   # Probe Google and Baidu in parallel: Google is blocked by the GFW in mainland China, so a Baidu probe is required
+   # to correctly detect that Chinese users with working internet do have working internet.
+   internet_probe_urls = ("https://www.google.com", "https://www.baidu.com")
    async with aiohttp.ClientSession(timeout=timeout) as session:
-       with contextlib.suppress(ClientError, OSError):
-           async with session.get("https://www.google.com") as resp:
-               results["internet_accessible"] = resp.status < 500
+       async def _probe(url: str) -> bool:
+           try:
+               async with session.get(url) as resp:
+                   return resp.status < 500
+           except (ClientError, OSError, asyncio.TimeoutError):
+               return False
+       probe_tasks = [asyncio.create_task(_probe(u)) for u in internet_probe_urls]
+       try:
+           for fut in asyncio.as_completed(probe_tasks):
+               if await fut:
+                   results["internet_accessible"] = True
+                   break
+       finally:
+           for t in probe_tasks:
+               if not t.done():
+                   t.cancel()
+           await asyncio.gather(*probe_tasks, return_exceptions=True)
    if not results["internet_accessible"]:
        return results
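The concurrency pattern in the new code above (start all probes, take the first success, cancel the stragglers) can be exercised standalone, with fake probes standing in for the HTTP requests:

```python
import asyncio

async def first_success(coros):
    """Run probe coroutines concurrently; return True as soon as any one
    succeeds, cancelling the rest — mirroring the diff above."""
    tasks = [asyncio.create_task(c) for c in coros]
    ok = False
    try:
        for fut in asyncio.as_completed(tasks):
            if await fut:
                ok = True
                break
    finally:
        for t in tasks:
            if not t.done():
                t.cancel()
        # Swallow CancelledError from any cancelled probes.
        await asyncio.gather(*tasks, return_exceptions=True)
    return ok

async def fake_probe(delay, result):
    await asyncio.sleep(delay)
    return result

# The fast probe fails (a blocked endpoint); the slower one succeeds.
print(asyncio.run(first_success([fake_probe(0.01, False), fake_probe(0.03, True)])))
```

The key property is that total latency is bounded by the fastest *successful* probe, not by the slowest one, so users behind the GFW are not penalized by the Google timeout.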

File diff suppressed because it is too large.