# Mirror of https://github.com/comfyanonymous/ComfyUI.git (synced 2026-04-25 09:52:35 +08:00)
# 237 lines · 6.7 KiB · Python
from typing import Literal

from pydantic import BaseModel, Field


class Text2ImageTaskCreationRequest(BaseModel):
    model: str = Field(...)
    prompt: str = Field(...)
    response_format: str | None = Field("url")
    size: str | None = Field(None)
    seed: int | None = Field(0, ge=0, le=2147483647)
    guidance_scale: float | None = Field(..., ge=1.0, le=10.0)
    watermark: bool | None = Field(False)


class Seedream4Options(BaseModel):
    max_images: int = Field(15)


class Seedream4TaskCreationRequest(BaseModel):
    model: str = Field(...)
    prompt: str = Field(...)
    response_format: str = Field("url")
    image: list[str] | None = Field(None, description="Image URLs")
    size: str = Field(...)
    seed: int = Field(..., ge=0, le=2147483647)
    sequential_image_generation: str = Field("disabled")
    sequential_image_generation_options: Seedream4Options = Field(Seedream4Options(max_images=15))
    watermark: bool = Field(False)
    output_format: str | None = None


class ImageTaskCreationResponse(BaseModel):
    model: str = Field(...)
    created: int = Field(..., description="Unix timestamp (in seconds) indicating time when the request was created.")
    data: list = Field([], description="Contains information about the generated image(s).")
    error: dict = Field({}, description="Contains `code` and `message` fields in case of error.")


class TaskTextContent(BaseModel):
    type: str = Field("text")
    text: str = Field(...)


class TaskImageContentUrl(BaseModel):
    url: str = Field(...)


class TaskImageContent(BaseModel):
    type: str = Field("image_url")
    image_url: TaskImageContentUrl = Field(...)
    role: Literal["first_frame", "last_frame", "reference_image"] | None = Field(None)


class TaskVideoContentUrl(BaseModel):
    url: str = Field(...)


class TaskVideoContent(BaseModel):
    type: str = Field("video_url")
    video_url: TaskVideoContentUrl = Field(...)
    role: str = Field("reference_video")


class TaskAudioContentUrl(BaseModel):
    url: str = Field(...)


class TaskAudioContent(BaseModel):
    type: str = Field("audio_url")
    audio_url: TaskAudioContentUrl = Field(...)
    role: str = Field("reference_audio")


class Text2VideoTaskCreationRequest(BaseModel):
    model: str = Field(...)
    content: list[TaskTextContent] = Field(..., min_length=1)
    generate_audio: bool | None = Field(...)


class Image2VideoTaskCreationRequest(BaseModel):
    model: str = Field(...)
    content: list[TaskTextContent | TaskImageContent] = Field(..., min_length=2)
    generate_audio: bool | None = Field(...)


class Seedance2TaskCreationRequest(BaseModel):
    model: str = Field(...)
    content: list[TaskTextContent | TaskImageContent | TaskVideoContent | TaskAudioContent] = Field(..., min_length=1)
    generate_audio: bool | None = Field(None)
    resolution: str | None = Field(None)
    ratio: str | None = Field(None)
    duration: int | None = Field(None, ge=4, le=15)
    seed: int | None = Field(None, ge=0, le=2147483647)
    watermark: bool | None = Field(None)


class TaskCreationResponse(BaseModel):
    id: str = Field(...)


class TaskStatusError(BaseModel):
    code: str = Field(...)
    message: str = Field(...)


class TaskStatusResult(BaseModel):
    video_url: str = Field(...)


class TaskStatusUsage(BaseModel):
    completion_tokens: int = Field(0)
    total_tokens: int = Field(0)


class TaskStatusResponse(BaseModel):
    id: str = Field(...)
    model: str = Field(...)
    status: Literal["queued", "running", "cancelled", "succeeded", "failed"] = Field(...)
    error: TaskStatusError | None = Field(None)
    content: TaskStatusResult | None = Field(None)
    usage: TaskStatusUsage | None = Field(None)
class GetAssetResponse(BaseModel):
    id: str = Field(...)
    name: str | None = Field(None)
    url: str | None = Field(None)
    asset_type: str = Field(...)
    group_id: str = Field(...)
    status: str = Field(...)
    error: TaskStatusError | None = Field(None)


class SeedanceCreateVisualValidateSessionResponse(BaseModel):
    session_id: str = Field(...)
    h5_link: str = Field(...)


class SeedanceGetVisualValidateSessionResponse(BaseModel):
    session_id: str = Field(...)
    status: str = Field(...)
    group_id: str | None = Field(None)
    error_code: str | None = Field(None)
    error_message: str | None = Field(None)


class SeedanceCreateAssetRequest(BaseModel):
    group_id: str = Field(...)
    url: str = Field(...)
    asset_type: str = Field(...)
    name: str | None = Field(None, max_length=64)
    project_name: str | None = Field(None)


class SeedanceCreateAssetResponse(BaseModel):
    asset_id: str = Field(...)


# Dollars per 1K tokens, keyed by (model_id, has_video_input).
SEEDANCE2_PRICE_PER_1K_TOKENS = {
    ("dreamina-seedance-2-0-260128", False): 0.007,
    ("dreamina-seedance-2-0-260128", True): 0.0043,
    ("dreamina-seedance-2-0-fast-260128", False): 0.0056,
    ("dreamina-seedance-2-0-fast-260128", True): 0.0033,
}
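Since the table is keyed per 1K tokens, estimating the dollar cost of a run means scaling a token count (e.g. `total_tokens` as reported in `TaskStatusUsage`) by the matching rate. A self-contained sketch; the table is copied from above and the helper name `estimate_cost_usd` is illustrative:

```python
# Copy of SEEDANCE2_PRICE_PER_1K_TOKENS above, so this sketch runs standalone.
SEEDANCE2_PRICE_PER_1K_TOKENS = {
    ("dreamina-seedance-2-0-260128", False): 0.007,
    ("dreamina-seedance-2-0-260128", True): 0.0043,
    ("dreamina-seedance-2-0-fast-260128", False): 0.0056,
    ("dreamina-seedance-2-0-fast-260128", True): 0.0033,
}


def estimate_cost_usd(model_id: str, has_video_input: bool, total_tokens: int) -> float:
    """Price is quoted per 1K tokens, so divide the token count by 1000."""
    rate = SEEDANCE2_PRICE_PER_1K_TOKENS[(model_id, has_video_input)]
    return total_tokens / 1000 * rate
```

For example, 50,000 tokens on the non-fast model without video input comes out to 50 × 0.007 ≈ $0.35.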


RECOMMENDED_PRESETS = [
    ("1024x1024 (1:1)", 1024, 1024),
    ("864x1152 (3:4)", 864, 1152),
    ("1152x864 (4:3)", 1152, 864),
    ("1280x720 (16:9)", 1280, 720),
    ("720x1280 (9:16)", 720, 1280),
    ("832x1248 (2:3)", 832, 1248),
    ("1248x832 (3:2)", 1248, 832),
    ("1512x648 (21:9)", 1512, 648),
    ("2048x2048 (1:1)", 2048, 2048),
    ("Custom", None, None),
]

RECOMMENDED_PRESETS_SEEDREAM_4 = [
    ("2048x2048 (1:1)", 2048, 2048),
    ("2304x1728 (4:3)", 2304, 1728),
    ("1728x2304 (3:4)", 1728, 2304),
    ("2560x1440 (16:9)", 2560, 1440),
    ("1440x2560 (9:16)", 1440, 2560),
    ("2496x1664 (3:2)", 2496, 1664),
    ("1664x2496 (2:3)", 1664, 2496),
    ("3024x1296 (21:9)", 3024, 1296),
    ("3072x3072 (1:1)", 3072, 3072),
    ("4096x4096 (1:1)", 4096, 4096),
    ("Custom", None, None),
]


# Seedance 2.0 reference video pixel count limits per model and output resolution.
SEEDANCE2_REF_VIDEO_PIXEL_LIMITS = {
    "dreamina-seedance-2-0-260128": {
        "480p": {"min": 409_600, "max": 927_408},
        "720p": {"min": 409_600, "max": 927_408},
        "1080p": {"min": 409_600, "max": 2_073_600},
    },
    "dreamina-seedance-2-0-fast-260128": {
        "480p": {"min": 409_600, "max": 927_408},
        "720p": {"min": 409_600, "max": 927_408},
    },
}
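These limits can be checked client-side before uploading a reference video. A sketch, assuming the range is inclusive on both ends (the table is copied from above; `ref_video_pixels_ok` is an illustrative helper, not part of the module):

```python
# Copy of SEEDANCE2_REF_VIDEO_PIXEL_LIMITS above, so this sketch runs standalone.
SEEDANCE2_REF_VIDEO_PIXEL_LIMITS = {
    "dreamina-seedance-2-0-260128": {
        "480p": {"min": 409_600, "max": 927_408},
        "720p": {"min": 409_600, "max": 927_408},
        "1080p": {"min": 409_600, "max": 2_073_600},
    },
    "dreamina-seedance-2-0-fast-260128": {
        "480p": {"min": 409_600, "max": 927_408},
        "720p": {"min": 409_600, "max": 927_408},
    },
}


def ref_video_pixels_ok(model_id: str, resolution: str, width: int, height: int) -> bool:
    """True if width*height falls inside the inclusive [min, max] pixel range."""
    limits = SEEDANCE2_REF_VIDEO_PIXEL_LIMITS.get(model_id, {}).get(resolution)
    if limits is None:
        raise ValueError(f"no reference-video limits for {model_id!r} at {resolution!r}")
    return limits["min"] <= width * height <= limits["max"]
```

For example, a 1280×720 reference video (921,600 pixels) fits the 720p range, while a 1920×1080 one (2,073,600 pixels) is only acceptable at 1080p on the non-fast model.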


# The times in this dictionary are given for a 10-second duration.
VIDEO_TASKS_EXECUTION_TIME = {
    "seedance-1-0-lite-t2v-250428": {
        "480p": 40,
        "720p": 60,
        "1080p": 90,
    },
    "seedance-1-0-lite-i2v-250428": {
        "480p": 40,
        "720p": 60,
        "1080p": 90,
    },
    "seedance-1-0-pro-250528": {
        "480p": 70,
        "720p": 85,
        "1080p": 115,
    },
    "seedance-1-0-pro-fast-251015": {
        "480p": 50,
        "720p": 65,
        "1080p": 100,
    },
    "seedance-1-5-pro-251215": {
        "480p": 80,
        "720p": 100,
        "1080p": 150,
    },
}
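Because the table is normalized to a 10-second clip, a rough runtime estimate for another duration can be derived by scaling the baseline. Linear scaling is an assumption here, as is the helper name `estimated_runtime_seconds`:

```python
# Abridged copy of VIDEO_TASKS_EXECUTION_TIME above (two models), so the sketch runs standalone.
VIDEO_TASKS_EXECUTION_TIME = {
    "seedance-1-0-lite-t2v-250428": {"480p": 40, "720p": 60, "1080p": 90},
    "seedance-1-0-pro-250528": {"480p": 70, "720p": 85, "1080p": 115},
}


def estimated_runtime_seconds(model_id: str, resolution: str, duration_s: float) -> float:
    """Scale the 10-second baseline linearly to the requested duration (assumption)."""
    baseline = VIDEO_TASKS_EXECUTION_TIME[model_id][resolution]
    return baseline * duration_s / 10.0
```

For example, a 5-second 720p clip on `seedance-1-0-pro-250528` would be estimated at 85 × 5 / 10 = 42.5 seconds; such an estimate is a reasonable basis for a polling timeout or progress bar, not a guarantee.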