Compare commits

...

17 Commits

Author SHA1 Message Date
RandomGitUser321
8d5d8713ca
Merge c8887fc64e into bbe2c13a70 2026-01-30 15:36:21 +08:00
comfyanonymous
bbe2c13a70
Make empty hunyuan latent 1.0 work with the 1.5 model. (#12171)
2026-01-29 23:52:22 -05:00
Christian Byrne
3aace5c8dc
fix: count non-dict items in outputs_count (#12166)
Move count increment before isinstance(item, dict) check so that
non-dict output items (like text strings from PreviewAny node)
are included in outputs_count.

This aligns OSS Python with Cloud's Go implementation which uses
len(itemsArray) to count ALL items regardless of type.

Amp-Thread-ID: https://ampcode.com/threads/T-019c0bb5-14e0-744f-8808-1e57653f3ae3

Co-authored-by: Amp <amp@ampcode.com>
2026-01-29 17:10:08 -08:00
comfyanonymous
b0d9708974 ComfyUI v0.11.1
2026-01-29 00:27:23 -05:00
comfyanonymous
c9b633d84f
Add missing spacial downscale ratios. (#12146) 2026-01-28 20:52:51 -05:00
ComfyUI Wiki
1711020904
chore: update workflow templates to v0.8.27 (#12141)
2026-01-28 12:48:02 -05:00
Dr.Lt.Data
d9b8567547
bump manager version to 4.1b1 (#12140) 2026-01-28 12:47:37 -05:00
Alexander Piskun
6c5f906bf2
feat(api-nodes): add Grok Imagine nodes (#12136) 2026-01-28 12:46:57 -05:00
comfyanonymous
4f5bd39b1c
Update Python 3.14 compatibility notes in README (#12127)
2026-01-27 19:58:48 -05:00
guill
dcff27fe3f
Add support for dev-only nodes. (#12106)
When a node is declared as dev-only, it doesn't show in the default UI unless dev mode is enabled in the settings. The intention is to allow nodes related to unit testing to be included in ComfyUI distributions without confusing the average user (a declaration sketch follows the commit list below).
2026-01-27 13:03:29 -08:00
RandomGitUser321
c8887fc64e
Update the rocm.nightlies.amd.com link for the 7000 series
As far as I know, they stopped using the gfx110X-dgpu directory and now use the gfx110X-all directory instead. You can verify this by looking at the build dates (+rocmX.X.Xa2025___).
2025-12-26 19:48:02 -05:00
RandomGitUser321
0323d8746d
Merge branch 'comfyanonymous:master' into master 2025-12-26 19:42:11 -05:00
RandomGitUser321
0a320efbae
Merge branch 'comfyanonymous:master' into master 2025-08-21 11:15:06 -04:00
RandomGitUser321
778690e7b2
Merge branch 'comfyanonymous:master' into master 2025-08-15 11:04:35 -04:00
RandomGitUser321
7c595586dd
grammar 2024-09-22 08:28:11 -04:00
RandomGitUser321
80cd3fb13c
directions on launching 2024-09-22 08:05:12 -04:00
RandomGitUser321
826ad087a0
gradio websocket app example 2024-09-22 07:40:08 -04:00
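As referenced in the dev-only commit above (#12106), the flag is set through the node schema. Below is a minimal sketch of a node opting in via the V3 schema's new `is_dev_only` field from this changeset; the node itself (id, inputs, outputs) is hypothetical:

```python
from comfy_api.latest import IO


class DebugEchoNode(IO.ComfyNode):  # hypothetical test-helper node
    @classmethod
    def define_schema(cls):
        return IO.Schema(
            node_id="DebugEchoNode",
            display_name="Debug Echo",
            category="dev/testing",
            inputs=[IO.String.Input("text")],
            outputs=[IO.String.Output()],
            is_dev_only=True,  # hidden from search/menus unless dev mode is enabled
        )

    @classmethod
    def execute(cls, text: str) -> IO.NodeOutput:
        return IO.NodeOutput(text)
```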
14 changed files with 826 additions and 8 deletions

View File

@@ -208,7 +208,7 @@ comfy install
## Manual Install (Windows, Linux)
-Python 3.14 works but you may encounter issues with the torch compile node. The free threaded variant is still missing some dependencies.
+Python 3.14 works but some custom nodes may have issues. The free threaded variant works but some dependencies will enable the GIL so it's not fully supported.
Python 3.13 is very well supported. If you have trouble with some custom node dependencies on 3.13 you can try 3.12

View File

@@ -236,6 +236,8 @@ class ComfyNodeABC(ABC):
    """Flags a node as experimental, informing users that it may change or not work as expected."""
    DEPRECATED: bool
    """Flags a node as deprecated, indicating to users that they should find alternatives to this node."""
+    DEV_ONLY: bool
+    """Flags a node as dev-only, hiding it from search/menus unless dev mode is enabled."""
    API_NODE: Optional[bool]
    """Flags a node as an API node. See: https://docs.comfy.org/tutorials/api-nodes/overview."""

View File

@@ -81,6 +81,7 @@ class SD_X4(LatentFormat):
class SC_Prior(LatentFormat):
    latent_channels = 16
+    spacial_downscale_ratio = 42
    def __init__(self):
        self.scale_factor = 1.0
        self.latent_rgb_factors = [
@@ -103,6 +104,7 @@ class SC_Prior(LatentFormat):
        ]
class SC_B(LatentFormat):
+    spacial_downscale_ratio = 4
    def __init__(self):
        self.scale_factor = 1.0 / 0.43
        self.latent_rgb_factors = [
@@ -274,6 +276,7 @@ class Mochi(LatentFormat):
class LTXV(LatentFormat):
    latent_channels = 128
    latent_dimensions = 3
+    spacial_downscale_ratio = 32
    def __init__(self):
        self.latent_rgb_factors = [
@@ -517,6 +520,7 @@ class Wan21(LatentFormat):
class Wan22(Wan21):
    latent_channels = 48
    latent_dimensions = 3
+    spacial_downscale_ratio = 16
    latent_rgb_factors = [
        [ 0.0119, 0.0103, 0.0046],
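For orientation, `spacial_downscale_ratio` expresses how many pixels map to one latent cell per spatial axis. A minimal sketch of the relationship, assuming the ratio divides the image size as in ComfyUI's empty-latent nodes:

```python
def latent_spatial_size(width: int, height: int, spacial_downscale_ratio: int) -> tuple[int, int]:
    # e.g. a 1024x1024 image with Wan22's ratio of 16 yields a 64x64 latent grid
    return width // spacial_downscale_ratio, height // spacial_downscale_ratio

print(latent_spatial_size(1024, 1024, 16))  # (64, 64)
```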

View File

@@ -1247,6 +1247,7 @@ class NodeInfoV1:
    output_node: bool=None
    deprecated: bool=None
    experimental: bool=None
+    dev_only: bool=None
    api_node: bool=None
    price_badge: dict | None = None
    search_aliases: list[str]=None
@@ -1264,6 +1265,7 @@ class NodeInfoV3:
    output_node: bool=None
    deprecated: bool=None
    experimental: bool=None
+    dev_only: bool=None
    api_node: bool=None
    price_badge: dict | None = None
@@ -1375,6 +1377,8 @@ class Schema:
    """Flags a node as deprecated, indicating to users that they should find alternatives to this node."""
    is_experimental: bool=False
    """Flags a node as experimental, informing users that it may change or not work as expected."""
+    is_dev_only: bool=False
+    """Flags a node as dev-only, hiding it from search/menus unless dev mode is enabled."""
    is_api_node: bool=False
    """Flags a node as an API node. See: https://docs.comfy.org/tutorials/api-nodes/overview."""
    price_badge: PriceBadge | None = None
@@ -1485,6 +1489,7 @@ class Schema:
            output_node=self.is_output_node,
            deprecated=self.is_deprecated,
            experimental=self.is_experimental,
+            dev_only=self.is_dev_only,
            api_node=self.is_api_node,
            python_module=getattr(cls, "RELATIVE_PYTHON_MODULE", "nodes"),
            price_badge=self.price_badge.as_dict(self.inputs) if self.price_badge is not None else None,
@@ -1519,6 +1524,7 @@ class Schema:
            output_node=self.is_output_node,
            deprecated=self.is_deprecated,
            experimental=self.is_experimental,
+            dev_only=self.is_dev_only,
            api_node=self.is_api_node,
            python_module=getattr(cls, "RELATIVE_PYTHON_MODULE", "nodes"),
            price_badge=self.price_badge.as_dict(self.inputs) if self.price_badge is not None else None,
@@ -1791,6 +1797,14 @@ class _ComfyNodeBaseInternal(_ComfyNodeInternal):
            cls.GET_SCHEMA()
        return cls._DEPRECATED
+    _DEV_ONLY = None
+    @final
+    @classproperty
+    def DEV_ONLY(cls):  # noqa
+        if cls._DEV_ONLY is None:
+            cls.GET_SCHEMA()
+        return cls._DEV_ONLY
+
    _API_NODE = None
    @final
    @classproperty
@@ -1893,6 +1907,8 @@ class _ComfyNodeBaseInternal(_ComfyNodeInternal):
            cls._EXPERIMENTAL = schema.is_experimental
        if cls._DEPRECATED is None:
            cls._DEPRECATED = schema.is_deprecated
+        if cls._DEV_ONLY is None:
+            cls._DEV_ONLY = schema.is_dev_only
        if cls._API_NODE is None:
            cls._API_NODE = schema.is_api_node
        if cls._OUTPUT_NODE is None:
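The new `DEV_ONLY` accessor mirrors the lazily-cached classproperty pattern already used for `DEPRECATED` and friends: the flag is populated on first access by building the schema. A simplified, self-contained sketch of that pattern (the `classproperty` helper here is a stand-in for ComfyUI's internal one):

```python
class classproperty:  # simplified stand-in for ComfyUI's helper
    def __init__(self, fget):
        self.fget = fget

    def __get__(self, _obj, owner):
        return self.fget(owner)


class NodeBase:
    _DEV_ONLY = None

    @classmethod
    def GET_SCHEMA(cls):
        cls._DEV_ONLY = False  # in ComfyUI this comes from schema.is_dev_only

    @classproperty
    def DEV_ONLY(cls):
        if cls._DEV_ONLY is None:
            cls.GET_SCHEMA()  # populate cached flags on first access
        return cls._DEV_ONLY


print(NodeBase.DEV_ONLY)  # False
```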

View File

@@ -0,0 +1,67 @@
from pydantic import BaseModel, Field


class ImageGenerationRequest(BaseModel):
    model: str = Field(...)
    prompt: str = Field(...)
    aspect_ratio: str = Field(...)
    n: int = Field(...)
    seed: int = Field(...)
    response_for: str = Field("url")


class InputUrlObject(BaseModel):
    url: str = Field(...)


class ImageEditRequest(BaseModel):
    model: str = Field(...)
    image: InputUrlObject = Field(...)
    prompt: str = Field(...)
    resolution: str = Field(...)
    n: int = Field(...)
    seed: int = Field(...)
    response_for: str = Field("url")


class VideoGenerationRequest(BaseModel):
    model: str = Field(...)
    prompt: str = Field(...)
    image: InputUrlObject | None = Field(...)
    duration: int = Field(...)
    aspect_ratio: str | None = Field(...)
    resolution: str = Field(...)
    seed: int = Field(...)


class VideoEditRequest(BaseModel):
    model: str = Field(...)
    prompt: str = Field(...)
    video: InputUrlObject = Field(...)
    seed: int = Field(...)


class ImageResponseObject(BaseModel):
    url: str | None = Field(None)
    b64_json: str | None = Field(None)
    revised_prompt: str | None = Field(None)


class ImageGenerationResponse(BaseModel):
    data: list[ImageResponseObject] = Field(...)


class VideoGenerationResponse(BaseModel):
    request_id: str = Field(...)


class VideoResponseObject(BaseModel):
    url: str = Field(...)
    upsampled_prompt: str | None = Field(None)
    duration: int = Field(...)


class VideoStatusResponse(BaseModel):
    status: str | None = Field(None)
    video: VideoResponseObject | None = Field(None)
    model: str | None = Field(None)
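As a quick illustration of how these request models serialize on the wire (assuming pydantic v2's `model_dump_json`):

```python
req = ImageGenerationRequest(
    model="grok-imagine-image-beta",
    prompt="an orange cat under a full moon",
    aspect_ratio="1:1",
    n=1,
    seed=0,
)
print(req.model_dump_json())
# {"model":"grok-imagine-image-beta","prompt":"an orange cat under a full moon","aspect_ratio":"1:1","n":1,"seed":0,"response_for":"url"}
```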

View File

@@ -0,0 +1,417 @@
import torch
from typing_extensions import override
from comfy_api.latest import IO, ComfyExtension, Input
from comfy_api_nodes.apis.grok import (
ImageEditRequest,
ImageGenerationRequest,
ImageGenerationResponse,
InputUrlObject,
VideoEditRequest,
VideoGenerationRequest,
VideoGenerationResponse,
VideoStatusResponse,
)
from comfy_api_nodes.util import (
ApiEndpoint,
download_url_to_image_tensor,
download_url_to_video_output,
get_fs_object_size,
get_number_of_images,
poll_op,
sync_op,
tensor_to_base64_string,
upload_video_to_comfyapi,
validate_string,
validate_video_duration,
)
class GrokImageNode(IO.ComfyNode):
@classmethod
def define_schema(cls):
return IO.Schema(
node_id="GrokImageNode",
display_name="Grok Image",
category="api node/image/Grok",
description="Generate images using Grok based on a text prompt",
inputs=[
IO.Combo.Input("model", options=["grok-imagine-image-beta"]),
IO.String.Input(
"prompt",
multiline=True,
tooltip="The text prompt used to generate the image",
),
IO.Combo.Input(
"aspect_ratio",
options=[
"1:1",
"2:3",
"3:2",
"3:4",
"4:3",
"9:16",
"16:9",
"9:19.5",
"19.5:9",
"9:20",
"20:9",
"1:2",
"2:1",
],
),
IO.Int.Input(
"number_of_images",
default=1,
min=1,
max=10,
step=1,
tooltip="Number of images to generate",
display_mode=IO.NumberDisplay.number,
),
IO.Int.Input(
"seed",
default=0,
min=0,
max=2147483647,
step=1,
display_mode=IO.NumberDisplay.number,
control_after_generate=True,
tooltip="Seed to determine if node should re-run; "
"actual results are nondeterministic regardless of seed.",
),
],
outputs=[
IO.Image.Output(),
],
hidden=[
IO.Hidden.auth_token_comfy_org,
IO.Hidden.api_key_comfy_org,
IO.Hidden.unique_id,
],
is_api_node=True,
price_badge=IO.PriceBadge(
depends_on=IO.PriceBadgeDepends(widgets=["number_of_images"]),
expr="""{"type":"usd","usd":0.033 * widgets.number_of_images}""",
),
)
@classmethod
async def execute(
cls,
model: str,
prompt: str,
aspect_ratio: str,
number_of_images: int,
seed: int,
) -> IO.NodeOutput:
validate_string(prompt, strip_whitespace=True, min_length=1)
response = await sync_op(
cls,
ApiEndpoint(path="/proxy/xai/v1/images/generations", method="POST"),
data=ImageGenerationRequest(
model=model,
prompt=prompt,
aspect_ratio=aspect_ratio,
n=number_of_images,
seed=seed,
),
response_model=ImageGenerationResponse,
)
if len(response.data) == 1:
return IO.NodeOutput(await download_url_to_image_tensor(response.data[0].url))
return IO.NodeOutput(
torch.cat(
[await download_url_to_image_tensor(i) for i in [str(d.url) for d in response.data if d.url]],
)
)
class GrokImageEditNode(IO.ComfyNode):
@classmethod
def define_schema(cls):
return IO.Schema(
node_id="GrokImageEditNode",
display_name="Grok Image Edit",
category="api node/image/Grok",
description="Modify an existing image based on a text prompt",
inputs=[
IO.Combo.Input("model", options=["grok-imagine-image-beta"]),
IO.Image.Input("image"),
IO.String.Input(
"prompt",
multiline=True,
tooltip="The text prompt used to generate the image",
),
IO.Combo.Input("resolution", options=["1K"]),
IO.Int.Input(
"number_of_images",
default=1,
min=1,
max=10,
step=1,
tooltip="Number of edited images to generate",
display_mode=IO.NumberDisplay.number,
),
IO.Int.Input(
"seed",
default=0,
min=0,
max=2147483647,
step=1,
display_mode=IO.NumberDisplay.number,
control_after_generate=True,
tooltip="Seed to determine if node should re-run; "
"actual results are nondeterministic regardless of seed.",
),
],
outputs=[
IO.Image.Output(),
],
hidden=[
IO.Hidden.auth_token_comfy_org,
IO.Hidden.api_key_comfy_org,
IO.Hidden.unique_id,
],
is_api_node=True,
price_badge=IO.PriceBadge(
depends_on=IO.PriceBadgeDepends(widgets=["number_of_images"]),
expr="""{"type":"usd","usd":0.002 + 0.033 * widgets.number_of_images}""",
),
)
@classmethod
async def execute(
cls,
model: str,
image: Input.Image,
prompt: str,
resolution: str,
number_of_images: int,
seed: int,
) -> IO.NodeOutput:
validate_string(prompt, strip_whitespace=True, min_length=1)
if get_number_of_images(image) != 1:
raise ValueError("Only one input image is supported.")
response = await sync_op(
cls,
ApiEndpoint(path="/proxy/xai/v1/images/edits", method="POST"),
data=ImageEditRequest(
model=model,
image=InputUrlObject(url=f"data:image/png;base64,{tensor_to_base64_string(image)}"),
prompt=prompt,
resolution=resolution.lower(),
n=number_of_images,
seed=seed,
),
response_model=ImageGenerationResponse,
)
if len(response.data) == 1:
return IO.NodeOutput(await download_url_to_image_tensor(response.data[0].url))
return IO.NodeOutput(
torch.cat(
[await download_url_to_image_tensor(i) for i in [str(d.url) for d in response.data if d.url]],
)
)
class GrokVideoNode(IO.ComfyNode):
@classmethod
def define_schema(cls):
return IO.Schema(
node_id="GrokVideoNode",
display_name="Grok Video",
category="api node/video/Grok",
description="Generate video from a prompt or an image",
inputs=[
IO.Combo.Input("model", options=["grok-imagine-video-beta"]),
IO.String.Input(
"prompt",
multiline=True,
tooltip="Text description of the desired video.",
),
IO.Combo.Input(
"resolution",
options=["480p", "720p"],
tooltip="The resolution of the output video.",
),
IO.Combo.Input(
"aspect_ratio",
options=["auto", "16:9", "4:3", "3:2", "1:1", "2:3", "3:4", "9:16"],
tooltip="The aspect ratio of the output video.",
),
IO.Int.Input(
"duration",
default=6,
min=1,
max=15,
step=1,
tooltip="The duration of the output video in seconds.",
display_mode=IO.NumberDisplay.slider,
),
IO.Int.Input(
"seed",
default=0,
min=0,
max=2147483647,
step=1,
display_mode=IO.NumberDisplay.number,
control_after_generate=True,
tooltip="Seed to determine if node should re-run; "
"actual results are nondeterministic regardless of seed.",
),
IO.Image.Input("image", optional=True),
],
outputs=[
IO.Video.Output(),
],
hidden=[
IO.Hidden.auth_token_comfy_org,
IO.Hidden.api_key_comfy_org,
IO.Hidden.unique_id,
],
is_api_node=True,
price_badge=IO.PriceBadge(
depends_on=IO.PriceBadgeDepends(widgets=["duration"], inputs=["image"]),
expr="""
(
$base := 0.181 * widgets.duration;
{"type":"usd","usd": inputs.image.connected ? $base + 0.002 : $base}
)
""",
),
)
@classmethod
async def execute(
cls,
model: str,
prompt: str,
resolution: str,
aspect_ratio: str,
duration: int,
seed: int,
image: Input.Image | None = None,
) -> IO.NodeOutput:
image_url = None
if image is not None:
if get_number_of_images(image) != 1:
raise ValueError("Only one input image is supported.")
image_url = InputUrlObject(url=f"data:image/png;base64,{tensor_to_base64_string(image)}")
validate_string(prompt, strip_whitespace=True, min_length=1)
initial_response = await sync_op(
cls,
ApiEndpoint(path="/proxy/xai/v1/videos/generations", method="POST"),
data=VideoGenerationRequest(
model=model,
image=image_url,
prompt=prompt,
resolution=resolution,
duration=duration,
aspect_ratio=None if aspect_ratio == "auto" else aspect_ratio,
seed=seed,
),
response_model=VideoGenerationResponse,
)
response = await poll_op(
cls,
ApiEndpoint(path=f"/proxy/xai/v1/videos/{initial_response.request_id}"),
status_extractor=lambda r: r.status if r.status is not None else "complete",
response_model=VideoStatusResponse,
)
return IO.NodeOutput(await download_url_to_video_output(response.video.url))
class GrokVideoEditNode(IO.ComfyNode):
@classmethod
def define_schema(cls):
return IO.Schema(
node_id="GrokVideoEditNode",
display_name="Grok Video Edit",
category="api node/video/Grok",
description="Edit an existing video based on a text prompt.",
inputs=[
IO.Combo.Input("model", options=["grok-imagine-video-beta"]),
IO.String.Input(
"prompt",
multiline=True,
tooltip="Text description of the desired video.",
),
IO.Video.Input("video", tooltip="Maximum supported duration is 8.7 seconds and 50MB file size."),
IO.Int.Input(
"seed",
default=0,
min=0,
max=2147483647,
step=1,
display_mode=IO.NumberDisplay.number,
control_after_generate=True,
tooltip="Seed to determine if node should re-run; "
"actual results are nondeterministic regardless of seed.",
),
],
outputs=[
IO.Video.Output(),
],
hidden=[
IO.Hidden.auth_token_comfy_org,
IO.Hidden.api_key_comfy_org,
IO.Hidden.unique_id,
],
is_api_node=True,
price_badge=IO.PriceBadge(
expr="""{"type":"usd","usd": 0.191, "format": {"suffix": "/sec", "approximate": true}}""",
),
)
@classmethod
async def execute(
cls,
model: str,
prompt: str,
video: Input.Video,
seed: int,
) -> IO.NodeOutput:
validate_string(prompt, strip_whitespace=True, min_length=1)
validate_video_duration(video, min_duration=1, max_duration=8.7)
video_stream = video.get_stream_source()
video_size = get_fs_object_size(video_stream)
if video_size > 50 * 1024 * 1024:
raise ValueError(f"Video size ({video_size / 1024 / 1024:.1f}MB) exceeds 50MB limit.")
initial_response = await sync_op(
cls,
ApiEndpoint(path="/proxy/xai/v1/videos/edits", method="POST"),
data=VideoEditRequest(
model=model,
video=InputUrlObject(url=await upload_video_to_comfyapi(cls, video)),
prompt=prompt,
seed=seed,
),
response_model=VideoGenerationResponse,
)
response = await poll_op(
cls,
ApiEndpoint(path=f"/proxy/xai/v1/videos/{initial_response.request_id}"),
status_extractor=lambda r: r.status if r.status is not None else "complete",
response_model=VideoStatusResponse,
)
return IO.NodeOutput(await download_url_to_video_output(response.video.url))
class GrokExtension(ComfyExtension):
@override
async def get_node_list(self) -> list[type[IO.ComfyNode]]:
return [
GrokImageNode,
GrokImageEditNode,
GrokVideoNode,
GrokVideoEditNode,
]
async def comfy_entrypoint() -> GrokExtension:
return GrokExtension()

View File

@@ -171,9 +171,10 @@ def get_outputs_summary(outputs: dict) -> tuple[int, Optional[dict]]:
            continue
        for item in items:
+            count += 1
            if not isinstance(item, dict):
                continue
-            count += 1
            if preview_output is None and is_previewable(media_type, item):
                enriched = {
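The effect of the change: every item is counted, and only the preview-enrichment path remains dict-only. A condensed, standalone sketch of the corrected counting behavior (names simplified from the real function):

```python
def count_output_items(outputs: dict) -> int:
    count = 0
    for items in outputs.values():
        if not isinstance(items, list):
            continue
        for item in items:
            count += 1  # counted before any type check
            if not isinstance(item, dict):
                continue  # non-dicts (e.g. PreviewAny strings) skip only the preview logic
    return count

print(count_output_items({"text": ["hello"], "images": [{"filename": "a.png"}]}))  # 2
```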

View File

@@ -56,7 +56,7 @@ class EmptyHunyuanLatentVideo(io.ComfyNode):
    @classmethod
    def execute(cls, width, height, length, batch_size=1) -> io.NodeOutput:
        latent = torch.zeros([batch_size, 16, ((length - 1) // 4) + 1, height // 8, width // 8], device=comfy.model_management.intermediate_device())
-        return io.NodeOutput({"samples":latent})
+        return io.NodeOutput({"samples": latent, "downscale_ratio_spacial": 8})
    generate = execute  # TODO: remove
@@ -73,7 +73,7 @@ class EmptyHunyuanVideo15Latent(EmptyHunyuanLatentVideo):
    def execute(cls, width, height, length, batch_size=1) -> io.NodeOutput:
        # Using scale factor of 16 instead of 8
        latent = torch.zeros([batch_size, 32, ((length - 1) // 4) + 1, height // 16, width // 16], device=comfy.model_management.intermediate_device())
-        return io.NodeOutput({"samples": latent})
+        return io.NodeOutput({"samples": latent, "downscale_ratio_spacial": 16})
class HunyuanVideo15ImageToVideo(io.ComfyNode):
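The empty-latent dict now carries its own spatial downscale factor, which is what lets downstream code distinguish the 8x Hunyuan 1.0 latent from the 16x 1.5 one. A hedged sketch of how a consumer might read it back (the helper and its fallback default are assumptions, not ComfyUI API):

```python
def pixel_size_of_latent(latent: dict) -> tuple[int, int]:
    # hypothetical helper: recover pixel dimensions from an empty-latent dict
    ratio = latent.get("downscale_ratio_spacial", 8)  # 8 assumed as the legacy default
    samples = latent["samples"]  # shape [batch, channels, frames, height, width]
    return samples.shape[-1] * ratio, samples.shape[-2] * ratio
```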

View File

@@ -1,3 +1,3 @@
# This file is automatically generated by the build process when version is
# updated in pyproject.toml.
-__version__ = "0.11.0"
+__version__ = "0.11.1"

View File

@@ -1 +1 @@
-comfyui_manager==4.0.5
+comfyui_manager==4.1b1

View File

@@ -1,6 +1,6 @@
[project]
name = "ComfyUI"
-version = "0.11.0"
+version = "0.11.1"
readme = "README.md"
license = { file = "LICENSE" }
requires-python = ">=3.10"

View File

@@ -1,5 +1,5 @@
comfyui-frontend-package==1.37.11
-comfyui-workflow-templates==0.8.24
+comfyui-workflow-templates==0.8.27
comfyui-embedded-docs==0.4.0
torch
torchsde

View File

@@ -0,0 +1,309 @@
# This Gradio example demonstrates using the websocket API and also decodes preview images.
# Gradio has a lot of idiosyncrasies and I'm definitely not an expert at coding for it.
# I'm sure there are a million and one better ways to code this, but it works pretty well and should get you started.
# I suggest taking the time to check any relevant comments throughout the code.
# For more info on working with Gradio: https://www.gradio.app/docs
# Ensure that ComfyUI has latent previews enabled.
# If you use Comfy Manager, make sure to set the preview type there, because it will override the --preview-method auto/latent2rgb/taesd launch flag settings.
# Check or change preview_method in "/custom_nodes/ComfyUI-Manager/config.ini"
# If you chose to install Gradio to your ComfyUI python venv, open a command prompt in this script_examples directory and run the following to launch the app:
# ..\..\python_embeded\python.exe -s ..\script_examples\gradio_websockets_api_example.py
import websocket #NOTE: websocket-client (https://github.com/websocket-client/websocket-client)
import uuid
import json
import urllib.request
import urllib.parse
from PIL import Image
import io
from io import BytesIO
import random
#If you want to use your local ComfyUI python installation, you'll need to navigate to your comfyui/python_embeded folder, open a cmd prompt and run "python.exe -m pip install gradio"
import gradio as gr
# adjust to your ComfyUI API settings
server_address = "127.0.0.1:8188"
client_id = str(uuid.uuid4())
#some globals to store previews, active state and progress
preview_image = None
active = False
interrupted = False
step_current = None
step_total = None
def interrupt_diffusion():
global interrupted, step_current, step_total
interrupted = True
step_current = None
step_total = None
req = urllib.request.Request("http://{}/interrupt".format(server_address), method='POST')
return urllib.request.urlopen(req)
def queue_prompt(prompt):
p = {"prompt": prompt, "client_id": client_id}
data = json.dumps(p).encode('utf-8')
req = urllib.request.Request("http://{}/prompt".format(server_address), data=data)
return json.loads(urllib.request.urlopen(req).read())
def get_image(filename, subfolder, folder_type):
data = {"filename": filename, "subfolder": subfolder, "type": folder_type}
url_values = urllib.parse.urlencode(data)
with urllib.request.urlopen("http://{}/view?{}".format(server_address, url_values)) as response:
return response.read()
def get_history(prompt_id):
with urllib.request.urlopen("http://{}/history/{}".format(server_address, prompt_id)) as response:
return json.loads(response.read())
def get_images(ws, prompt):
global preview_image, active, step_current, step_total
prompt_id = queue_prompt(prompt)['prompt_id']
output_images = {}
while True:
out = ws.recv()
if isinstance(out, str):
message = json.loads(out)
if message['type'] == 'executing':
data = message['data']
if data['node'] is None and data['prompt_id'] == prompt_id:
preview_image = None #clear these globals on completion just in case
step_current = None
step_total = None
active = False
break #Execution is done
elif message['type'] == 'progress':
data = message['data']
step_current = data['value']
step_total = data['max']
else:
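# Binary frames from ComfyUI start with an 8-byte header (two big-endian uint32s: event type, then image format), hence out[8:] to reach the JPEG/PNG payload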
bytesIO = BytesIO(out[8:])
preview_image = Image.open(bytesIO) # This is your preview in PIL image format
history = get_history(prompt_id)[prompt_id]
for node_id in history['outputs']:
node_output = history['outputs'][node_id]
images_output = []
if 'images' in node_output:
for image in node_output['images']:
image_data = get_image(image['filename'], image['subfolder'], image['type'])
images_output.append(image_data)
output_images[node_id] = images_output
return output_images
def get_prompt_images(prompt):
global preview_image
ws = websocket.WebSocket()
ws.connect("ws://{}/ws?clientId={}".format(server_address, client_id))
images = get_images(ws, prompt)
outputs = []
for node_id in images:
for image_data in images[node_id]:
image = Image.open(io.BytesIO(image_data))
outputs.append(image)
ws.close()
return outputs
############################################################################################################################
# Edit or add your own api workflow here. Make sure to enable dev mode in ComfyUI and to use the "Save(API Format)" option #
############################################################################################################################
prompt_text = """
{
"3": {
"class_type": "KSampler",
"inputs": {
"cfg": 8,
"denoise": 1,
"latent_image": [
"5",
0
],
"model": [
"4",
0
],
"negative": [
"7",
0
],
"positive": [
"6",
0
],
"sampler_name": "euler",
"scheduler": "normal",
"seed": -1,
"steps": 25
}
},
"4": {
"class_type": "CheckpointLoaderSimple",
"inputs": {
"ckpt_name": "sdxl_base_1.0_0.9vae.safetensors"
}
},
"5": {
"class_type": "EmptyLatentImage",
"inputs": {
"batch_size": 1,
"height": 1024,
"width": 1024
}
},
"6": {
"class_type": "CLIPTextEncode",
"inputs": {
"clip": [
"4",
1
],
"text": ""
}
},
"7": {
"class_type": "CLIPTextEncode",
"inputs": {
"clip": [
"4",
1
],
"text": ""
}
},
"8": {
"class_type": "VAEDecode",
"inputs": {
"samples": [
"3",
0
],
"vae": [
"4",
2
]
}
},
"9": {
"class_type": "SaveImage",
"inputs": {
"filename_prefix": "ComfyUI",
"images": [
"8",
0
]
}
}
}
"""
prompt = json.loads(prompt_text)
# You can also use the following if you'd rather just load a json; make sure to comment out or remove the line above.
# with open("/path/to/workflow.json", "r", encoding="utf-8") as f:
# prompt = json.load(f)
# start and stop timer are used for live updating the preview and progress
# no point in keeping the timer ticking if it's not currently generating
def start_timer():
global active
active = True
return gr.Timer(active=True)
def stop_timer():
global active
active = False
return gr.Timer(active=False)
def update_preview():
return gr.Image(value=preview_image)
# Gradio is somewhat finicky about multiple things trying to change the same output, so we switch between preview and image, while hiding the other
def window_preview():
return gr.Image(visible=False, value=None), gr.Image(visible=True, value=None), gr.Button(visible=False), gr.Button(visible=True, value="Stop: Busy")
def window_final():
if interrupted: #if we interrupted during the process, put things back to normal
return gr.Image(visible=True, value=None), gr.Image(visible=False), gr.Button(visible=True), gr.Button(visible=False)
else:
return gr.Image(visible=True), gr.Image(visible=False, value=None), gr.Button(visible=True), gr.Button(visible=False)
# Puts the progress on the stop button
def update_progress():
if step_current == 0 or step_current == None:
x = 0
else:
x = int(100 * (step_current / step_total))
if step_current == None or active == False:
message = "Stop: Busy"
else:
message = f"Stop: {step_current} / {step_total} steps {x}%"
return gr.Button(value=message)
# You will need to do a lot of editing here to match your workflow
def process(pos, neg, width, height, cfg, seed):
if seed <= -1:
seed = random.randint(0, 999999999)
prompt["4"]["inputs"]["ckpt_name"] = "sdxl_base_1.0_0.9vae.safetensors" #if you want to change the model, do it here
prompt["6"]["inputs"]["text"] = pos
prompt["7"]["inputs"]["text"] = neg
prompt["3"]["inputs"]["seed"] = seed
prompt["3"]["inputs"]["cfg"] = cfg
prompt["5"]["inputs"]["height"] = height
prompt["5"]["inputs"]["width"] = width
global interrupted
interrupted = False
images = get_prompt_images(prompt)
global active
active = False
try:
return gr.Image(value=images[0]) #not covering batch generations in this example because it requires setting the image output to a gr.Gallery, along with some other changes
except:
return gr.Image()
with gr.Blocks(analytics_enabled=False, fill_width=True, fill_height=True,) as example:
preview_timer = gr.Timer(value=1, active=False) # You can also lower the timer to something like 0.5 to get more frequent updates, but there's not really much point to it
with gr.Row():
with gr.Column():
with gr.Group():
user_prompt = gr.Textbox(label="Positive Prompt: ", value="orange cat, full moon, vibrant impressionistic painting, bright vivid rainbow of colors", lines=5, max_lines=20)
user_negativeprompt = gr.Textbox(label="Negative Prompt: ", value="text, watermark", lines=2, max_lines=10,)
with gr.Group():
with gr.Row():
user_width = gr.Slider(label="Width", minimum=512, maximum=1600, step=64, value=1152,)
user_height = gr.Slider(label="Height", minimum=512, maximum=1600, step=64, value=896,)
with gr.Row():
user_cfg = gr.Slider(label="CFG: ", minimum=1.0, maximum=16.0, step=0.1, value=4.5,)
user_seed = gr.Slider(label="Seed: (-1 for random)", minimum=-1, maximum=999999999, step=1, value=-1,)
generate = gr.Button("Generate", variant="primary")
stop = gr.Button("Stop", variant="stop", visible=False)
with gr.Column():
output_image = gr.Image(label="Image: ", type="pil", format="jpeg", interactive=False, visible=True)
output_preview = gr.Image(label="Preview: ", type="pil", format="jpeg", interactive=False, visible=False)
# On tick, we update the preview and then the progress
preview_timer.tick(
fn=update_preview, outputs=output_preview, show_progress="hidden").then(
fn=update_progress, outputs=stop, show_progress="hidden")
# On generate we switch windows/buttons, start the update tick, diffuse the image, stop the update tick and then finally, swap the image outputs/buttons back
generate.click(
fn=window_preview, outputs=[output_image, output_preview, generate, stop], show_progress="hidden").then(
fn=start_timer, outputs=preview_timer, show_progress="hidden").then(
fn=process, inputs=[user_prompt, user_negativeprompt, user_width, user_height, user_cfg, user_seed], outputs=output_image).then(
fn=stop_timer, outputs=preview_timer, show_progress="hidden").then(
fn=window_final, outputs=[output_image, output_preview, generate, stop], show_progress="hidden")
stop.click(fn=interrupt_diffusion, show_progress="hidden")
# Adjust settings to your needs; see https://www.gradio.app/docs/gradio/blocks#blocks-launch for more info
example.queue(max_size=2,) # how many users can queue up in line
example.launch(share=False, inbrowser=True, server_name="0.0.0.0", server_port=7860, enable_monitoring=False) # good for LAN-only setups

View File

@@ -679,6 +679,8 @@ class PromptServer():
            info['deprecated'] = True
        if getattr(obj_class, "EXPERIMENTAL", False):
            info['experimental'] = True
+        if getattr(obj_class, "DEV_ONLY", False):
+            info['dev_only'] = True
        if hasattr(obj_class, 'API_NODE'):
            info['api_node'] = obj_class.API_NODE
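With this in place, dev-only nodes advertise the flag through the /object_info endpoint alongside the existing deprecated/experimental flags; roughly, the relevant fragment of one node's entry would look like this (values illustrative):

```python
# illustrative shape of one /object_info entry for a dev-only node
node_info = {
    "name": "DebugEchoNode",  # hypothetical node from the sketch above
    "output_node": False,
    "dev_only": True,         # emitted only when DEV_ONLY is truthy
}
```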