Compare commits

...

14 Commits

Author SHA1 Message Date
Svein Ove Aas
0c1f7bcd42
Merge ce445263e6 into 3be0175166 2026-02-03 21:03:21 +01:00
comfyanonymous
3be0175166 ComfyUI v0.12.1 2026-02-03 15:01:46 -05:00
comfyanonymous
b8315e66cb
Fix tiled vae for ace step 1.5 (#12253) 2026-02-03 14:40:45 -05:00
comfyanonymous
ab1050bec3
Support ace step 1.5 base model loras. (#12252) 2026-02-03 13:54:23 -05:00
Alexander Piskun
fb23935c11
feat(comfy_api): add basic 3D Model file types (#12129)
* feat(comfy_api): add basic 3D Model file types

* update Tripo nodes to use File3DGLB

* update Rodin3D nodes to use File3DGLB

* address PR review feedback:

- Rename File3D parameter 'path' to 'source'
- Convert File3D.data property to get_data()
- Make .glb extension check case-insensitive in nodes_rodin.py
- Restrict SaveGLB node to only accept File3DGLB

* Fixed a bug in the Meshy Rig and Animation nodes

* Fix backward compatibility
2026-02-03 10:31:46 -08:00
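A quick sketch of the renamed surface described above, as it lands later in this compare (file name and bytes are hypothetical):

f = File3D(source="mesh.GLB")  # the 'path' parameter is now 'source'; format is inferred case-insensitively ("glb")
data = f.get_data()            # the former .data property is now the get_data() method, returning a BytesIO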
comfyanonymous
85fc35e8fa
Fix mac issue. (#12250)
2026-02-03 12:19:39 -05:00
comfyanonymous
223364743c
llama: cast logits as a comfy-weight (#12248)
This was using a different layer's weight with .to(). Change it to use the ops caster when the original layer is a comfy weight, so that it picks up the dynamic_vram and async_offload functionality in full.

Co-authored-by: Rattus <rattus128@gmail.com>
2026-02-03 11:31:36 -05:00
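In essence, the change is the following pattern (a condensed sketch of the hunk that appears later in this compare, in comfy/text_encoders/llama.py, not separate new code):

# Before: the tied embedding weight was pulled over with a plain .to(),
# bypassing comfy's weight-casting path.
#   weight = self.model.embed_tokens.weight.to(x)

# After: go through the ops caster when the layer is a comfy weight, so the
# logits projection also benefits from dynamic_vram and async_offload.
module = self.model.embed_tokens
if module.comfy_cast_weights:
    weight, _, offload_stream = comfy.ops.cast_bias_weight(module, input, offloadable=True)
else:
    weight = module.weight.to(x)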
comfyanonymous
affe881354
Fix some issues with mac. (#12247) 2026-02-03 11:07:04 -05:00
comfyanonymous
f5030e26fd
Add progress bar to ace step. (#12242)
2026-02-03 04:09:30 -05:00
comfyanonymous
66e1b07402 ComfyUI v0.12.0 2026-02-03 02:20:59 -05:00
ComfyUI Wiki
be4345d1c9
chore: update workflow templates to v0.8.31 (#12239) 2026-02-02 23:08:43 -08:00
comfyanonymous
3c1a1a2df8
Basic support for the ace step 1.5 model. (#12237) 2026-02-03 00:06:18 -05:00
Alexander Piskun
ba5bf3f1a8
[API Nodes] HitPaw API nodes (#12117)
* feat(api-nodes): add HitPaw API nodes

* remove face_soft_2x model as not working

---------

Co-authored-by: Robin Huang <robin.j.huang@gmail.com>
2026-02-02 19:17:59 -08:00
Svein Ove Aas
ce445263e6 feat: Add Landlock LSM sandbox for filesystem isolation
Implements Linux Landlock sandboxing to restrict filesystem access when
ComfyUI is running. This provides defense-in-depth against malicious
custom nodes or workflows that attempt to access sensitive files.

How it works:
- Uses Linux Landlock LSM (kernel 5.13+) via direct syscalls
- Restricts write access to specific directories (output, input, temp, user)
- Restricts read access to only what's needed (codebase, models, system libs)
- Handles ABI versions 1-5, including IOCTL_DEV for GPU access on v5+
- Exits with error if --enable-landlock is set but Landlock unavailable

Write access granted to:
- ComfyUI output, input, temp, and user directories
- System temp directory (for torch/backends)
- SQLite database directory (if configured)
- Paths specified via --landlock-allow-writable

Read access granted to:
- ComfyUI codebase directory
- All configured model directories (including extra_model_paths.yaml)
- Python installation and site-packages
- System libraries (/usr, /lib, /lib64, /opt, /etc, /proc, /sys)
- /nix (on NixOS systems)
- /dev (with ioctl for GPU access)
- Paths specified via --landlock-allow-readable

Usage:
  python main.py --enable-landlock
  python main.py --enable-landlock --landlock-allow-writable /extra/dir
  python main.py --enable-landlock --landlock-allow-readable ~/.cache/huggingface

Requirements:
- Linux with kernel 5.13+ (fails with error on unsupported systems)
- Once enabled, restrictions cannot be lifted for the process lifetime
- Network access is not restricted (Landlock FS only)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-01 02:34:58 +00:00
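For reference, the syscall sequence the commit message describes looks roughly like the ctypes sketch below. It is illustrative only, not the PR's code: syscall numbers are for x86_64, the access flags cover Landlock ABI v1 only, and error handling is reduced to asserts.

import ctypes
import os

libc = ctypes.CDLL(None, use_errno=True)

SYS_landlock_create_ruleset = 444  # x86_64 syscall numbers
SYS_landlock_add_rule = 445
SYS_landlock_restrict_self = 446
LANDLOCK_RULE_PATH_BENEATH = 1
PR_SET_NO_NEW_PRIVS = 38

# ABI v1 access bits (subset): READ_FILE | READ_DIR and WRITE_FILE | REMOVE_FILE | MAKE_REG
ACCESS_READ = (1 << 2) | (1 << 3)
ACCESS_WRITE = (1 << 1) | (1 << 5) | (1 << 8)

class RulesetAttr(ctypes.Structure):
    _fields_ = [("handled_access_fs", ctypes.c_uint64)]

class PathBeneathAttr(ctypes.Structure):
    _pack_ = 1  # the kernel struct is packed
    _fields_ = [("allowed_access", ctypes.c_uint64), ("parent_fd", ctypes.c_int32)]

def restrict_fs(readable: list[str], writable: list[str]) -> None:
    attr = RulesetAttr(handled_access_fs=ACCESS_READ | ACCESS_WRITE)
    ruleset_fd = libc.syscall(SYS_landlock_create_ruleset, ctypes.byref(attr), ctypes.sizeof(attr), 0)
    assert ruleset_fd >= 0, os.strerror(ctypes.get_errno())
    for paths, access in ((readable, ACCESS_READ), (writable, ACCESS_READ | ACCESS_WRITE)):
        for p in paths:
            fd = os.open(p, os.O_PATH)
            rule = PathBeneathAttr(allowed_access=access, parent_fd=fd)
            assert libc.syscall(SYS_landlock_add_rule, ruleset_fd, LANDLOCK_RULE_PATH_BENEATH, ctypes.byref(rule), 0) == 0
            os.close(fd)
    # no_new_privs is required before the (irreversible) self-restriction.
    assert libc.prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) == 0
    assert libc.syscall(SYS_landlock_restrict_self, ruleset_fd, 0) == 0
    os.close(ruleset_fd)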
35 changed files with 2763 additions and 176 deletions


@ -47,6 +47,9 @@ parser.add_argument("--extra-model-paths-config", type=str, default=None, metava
parser.add_argument("--output-directory", type=str, default=None, help="Set the ComfyUI output directory. Overrides --base-directory.")
parser.add_argument("--temp-directory", type=str, default=None, help="Set the ComfyUI temp directory (default is in the ComfyUI directory). Overrides --base-directory.")
parser.add_argument("--input-directory", type=str, default=None, help="Set the ComfyUI input directory. Overrides --base-directory.")
parser.add_argument("--enable-landlock", action="store_true", help="Use the Linux Landlock LSM to restrict filesystem writes to known ComfyUI and cache directories.")
parser.add_argument("--landlock-allow-writable", action="append", default=[], metavar="PATH", help="Extra directories that remain writable when --enable-landlock is set. Can be provided multiple times.")
parser.add_argument("--landlock-allow-readable", action="append", default=[], metavar="PATH", help="Extra directories to allow read access when --enable-landlock is set. Can be provided multiple times.")
parser.add_argument("--auto-launch", action="store_true", help="Automatically launch ComfyUI in the default browser.")
parser.add_argument("--disable-auto-launch", action="store_true", help="Disable auto launching the browser.")
parser.add_argument("--cuda-device", type=int, default=None, metavar="DEVICE_ID", help="Set the id of the cuda device this instance will use. All other devices will not be visible.")


@ -755,6 +755,10 @@ class ACEAudio(LatentFormat):
    latent_channels = 8
    latent_dimensions = 2

class ACEAudio15(LatentFormat):
    latent_channels = 64
    latent_dimensions = 1

class ChromaRadiance(LatentFormat):
    latent_channels = 3
    spacial_downscale_ratio = 1

comfy/ldm/ace/ace_step15.py: new file, 1,093 lines (diff suppressed because it is too large)


@ -332,6 +332,12 @@ def model_lora_keys_unet(model, key_map={}):
key_map["{}".format(key_lora)] = k
key_map["transformer.{}".format(key_lora)] = k
if isinstance(model, comfy.model_base.ACEStep15):
for k in sdk:
if k.startswith("diffusion_model.decoder.") and k.endswith(".weight"):
key_lora = k[len("diffusion_model.decoder."):-len(".weight")]
key_map["base_model.model.{}".format(key_lora)] = k # Official base model loras
return key_map


@ -50,6 +50,7 @@ import comfy.ldm.omnigen.omnigen2
import comfy.ldm.qwen_image.model
import comfy.ldm.kandinsky5.model
import comfy.ldm.anima.model
import comfy.ldm.ace.ace_step15
import comfy.model_management
import comfy.patcher_extension
@ -1540,6 +1541,47 @@ class ACEStep(BaseModel):
        out['lyrics_strength'] = comfy.conds.CONDConstant(kwargs.get("lyrics_strength", 1.0))
        return out

class ACEStep15(BaseModel):
    def __init__(self, model_config, model_type=ModelType.FLOW, device=None):
        super().__init__(model_config, model_type, device=device, unet_model=comfy.ldm.ace.ace_step15.AceStepConditionGenerationModel)

    def extra_conds(self, **kwargs):
        out = super().extra_conds(**kwargs)
        device = kwargs["device"]
        cross_attn = kwargs.get("cross_attn", None)
        if cross_attn is not None:
            out['c_crossattn'] = comfy.conds.CONDRegular(cross_attn)

        conditioning_lyrics = kwargs.get("conditioning_lyrics", None)
        if cross_attn is not None:
            out['lyric_embed'] = comfy.conds.CONDRegular(conditioning_lyrics)

        refer_audio = kwargs.get("reference_audio_timbre_latents", None)
        if refer_audio is None or len(refer_audio) == 0:
            refer_audio = torch.tensor([[[-1.3672e-01, -1.5820e-01, 5.8594e-01, -5.7422e-01, 3.0273e-02,
                                          2.7930e-01, -2.5940e-03, -2.0703e-01, -1.6113e-01, -1.4746e-01,
                                          -2.7710e-02, -1.8066e-01, -2.9688e-01, 1.6016e+00, -2.6719e+00,
                                          7.7734e-01, -1.3516e+00, -1.9434e-01, -7.1289e-02, -5.0938e+00,
                                          2.4316e-01, 4.7266e-01, 4.6387e-02, -6.6406e-01, -2.1973e-01,
                                          -6.7578e-01, -1.5723e-01, 9.5312e-01, -2.0020e-01, -1.7109e+00,
                                          5.8984e-01, -5.7422e-01, 5.1562e-01, 2.8320e-01, 1.4551e-01,
                                          -1.8750e-01, -5.9814e-02, 3.6719e-01, -1.0059e-01, -1.5723e-01,
                                          2.0605e-01, -4.3359e-01, -8.2812e-01, 4.5654e-02, -6.6016e-01,
                                          1.4844e-01, 9.4727e-02, 3.8477e-01, -1.2578e+00, -3.3203e-01,
                                          -8.5547e-01, 4.3359e-01, 4.2383e-01, -8.9453e-01, -5.0391e-01,
                                          -5.6152e-02, -2.9219e+00, -2.4658e-02, 5.0391e-01, 9.8438e-01,
                                          7.2754e-02, -2.1582e-01, 6.3672e-01, 1.0000e+00]]], device=device).movedim(-1, 1).repeat(1, 1, 750)
        else:
            refer_audio = refer_audio[-1]
        out['refer_audio'] = comfy.conds.CONDRegular(refer_audio)

        audio_codes = kwargs.get("audio_codes", None)
        if audio_codes is not None:
            out['audio_codes'] = comfy.conds.CONDRegular(torch.tensor(audio_codes, device=device))
        return out

class Omnigen2(BaseModel):
    def __init__(self, model_config, model_type=ModelType.FLOW, device=None):
        super().__init__(model_config, model_type, device=device, unet_model=comfy.ldm.omnigen.omnigen2.OmniGen2Transformer2DModel)


@ -655,6 +655,11 @@ def detect_unet_config(state_dict, key_prefix, metadata=None):
dit_config["num_visual_blocks"] = count_blocks(state_dict_keys, '{}visual_transformer_blocks.'.format(key_prefix) + '{}.')
return dit_config
if '{}encoder.lyric_encoder.layers.0.input_layernorm.weight'.format(key_prefix) in state_dict_keys:
dit_config = {}
dit_config["audio_model"] = "ace1.5"
return dit_config
if '{}input_blocks.0.0.weight'.format(key_prefix) not in state_dict_keys:
return None


@ -59,6 +59,7 @@ import comfy.text_encoders.kandinsky5
import comfy.text_encoders.jina_clip_2
import comfy.text_encoders.newbie
import comfy.text_encoders.anima
import comfy.text_encoders.ace15
import comfy.model_patcher
import comfy.lora
@ -452,6 +453,8 @@ class VAE:
        self.extra_1d_channel = None
        self.crop_input = True
        self.audio_sample_rate = 44100

        if config is None:
            if "decoder.mid.block_1.mix_factor" in sd:
                encoder_config = {'double_z': True, 'z_channels': 4, 'resolution': 256, 'in_channels': 3, 'out_ch': 3, 'ch': 128, 'ch_mult': [1, 2, 4, 4], 'num_res_blocks': 2, 'attn_resolutions': [], 'dropout': 0.0}
@ -549,14 +552,27 @@ class VAE:
                encoder_config={'target': "comfy.ldm.modules.diffusionmodules.model.Encoder", 'params': ddconfig},
                decoder_config={'target': "comfy.ldm.modules.diffusionmodules.model.Decoder", 'params': ddconfig})
            elif "decoder.layers.1.layers.0.beta" in sd:
                self.first_stage_model = AudioOobleckVAE()
                config = {}
                param_key = None
                self.upscale_ratio = 2048
                self.downscale_ratio = 2048
                if "decoder.layers.2.layers.1.weight_v" in sd:
                    param_key = "decoder.layers.2.layers.1.weight_v"
                if "decoder.layers.2.layers.1.parametrizations.weight.original1" in sd:
                    param_key = "decoder.layers.2.layers.1.parametrizations.weight.original1"
                if param_key is not None:
                    if sd[param_key].shape[-1] == 12:
                        config["strides"] = [2, 4, 4, 6, 10]
                        self.audio_sample_rate = 48000
                        self.upscale_ratio = 1920
                        self.downscale_ratio = 1920
                self.first_stage_model = AudioOobleckVAE(**config)
                self.memory_used_encode = lambda shape, dtype: (1000 * shape[2]) * model_management.dtype_size(dtype)
                self.memory_used_decode = lambda shape, dtype: (1000 * shape[2] * 2048) * model_management.dtype_size(dtype)
                self.latent_channels = 64
                self.output_channels = 2
                self.pad_channel_value = "replicate"
                self.upscale_ratio = 2048
                self.downscale_ratio = 2048
                self.latent_dim = 1
                self.process_output = lambda audio: audio
                self.process_input = lambda audio: audio
@ -856,7 +872,7 @@ class VAE:
/ 3.0)
return output
    def decode_tiled_1d(self, samples, tile_x=128, overlap=32):
    def decode_tiled_1d(self, samples, tile_x=256, overlap=32):
        if samples.ndim == 3:
            decode_fn = lambda a: self.first_stage_model.decode(a.to(self.vae_dtype).to(self.device)).float()
        else:
@ -1427,6 +1443,9 @@ def load_text_encoder_state_dicts(state_dicts=[], embedding_directory=None, clip
            clip_data_jina = clip_data[0]
            tokenizer_data["gemma_spiece_model"] = clip_data_gemma.get("spiece_model", None)
            tokenizer_data["jina_spiece_model"] = clip_data_jina.get("spiece_model", None)
        elif clip_type == CLIPType.ACE:
            clip_target.clip = comfy.text_encoders.ace15.te(**llama_detect(clip_data))
            clip_target.tokenizer = comfy.text_encoders.ace15.ACE15Tokenizer
        else:
            clip_target.clip = sdxl_clip.SDXLClipModel
            clip_target.tokenizer = sdxl_clip.SDXLTokenizer


@ -155,6 +155,8 @@ class SDClipModel(torch.nn.Module, ClipTokenWeightEncoder):
        self.execution_device = options.get("execution_device", self.execution_device)

        if isinstance(self.layer, list) or self.layer == "all":
            pass
        elif isinstance(layer_idx, list):
            self.layer = layer_idx
        elif layer_idx is None or abs(layer_idx) > self.num_layers:
            self.layer = "last"
        else:


@ -24,6 +24,7 @@ import comfy.text_encoders.hunyuan_image
import comfy.text_encoders.kandinsky5
import comfy.text_encoders.z_image
import comfy.text_encoders.anima
import comfy.text_encoders.ace15
from . import supported_models_base
from . import latent_formats
@ -1596,6 +1597,38 @@ class Kandinsky5Image(Kandinsky5):
return supported_models_base.ClipTarget(comfy.text_encoders.kandinsky5.Kandinsky5TokenizerImage, comfy.text_encoders.kandinsky5.te(**hunyuan_detect))
models = [LotusD, Stable_Zero123, SD15_instructpix2pix, SD15, SD20, SD21UnclipL, SD21UnclipH, SDXL_instructpix2pix, SDXLRefiner, SDXL, SSD1B, KOALA_700M, KOALA_1B, Segmind_Vega, SD_X4Upscaler, Stable_Cascade_C, Stable_Cascade_B, SV3D_u, SV3D_p, SD3, StableAudio, AuraFlow, PixArtAlpha, PixArtSigma, HunyuanDiT, HunyuanDiT1, FluxInpaint, Flux, FluxSchnell, GenmoMochi, LTXV, LTXAV, HunyuanVideo15_SR_Distilled, HunyuanVideo15, HunyuanImage21Refiner, HunyuanImage21, HunyuanVideoSkyreelsI2V, HunyuanVideoI2V, HunyuanVideo, CosmosT2V, CosmosI2V, CosmosT2IPredict2, CosmosI2VPredict2, ZImage, Lumina2, WAN22_T2V, WAN21_T2V, WAN21_I2V, WAN21_FunControl2V, WAN21_Vace, WAN21_Camera, WAN22_Camera, WAN22_S2V, WAN21_HuMo, WAN22_Animate, Hunyuan3Dv2mini, Hunyuan3Dv2, Hunyuan3Dv2_1, HiDream, Chroma, ChromaRadiance, ACEStep, Omnigen2, QwenImage, Flux2, Kandinsky5Image, Kandinsky5, Anima]
class ACEStep15(supported_models_base.BASE):
    unet_config = {
        "audio_model": "ace1.5",
    }

    unet_extra_config = {}

    sampling_settings = {
        "multiplier": 1.0,
        "shift": 3.0,
    }

    latent_format = comfy.latent_formats.ACEAudio15

    memory_usage_factor = 4.7

    supported_inference_dtypes = [torch.bfloat16, torch.float32]

    vae_key_prefix = ["vae."]
    text_encoder_key_prefix = ["text_encoders."]

    def get_model(self, state_dict, prefix="", device=None):
        out = model_base.ACEStep15(self, device=device)
        return out

    def clip_target(self, state_dict={}):
        pref = self.text_encoder_key_prefix[0]
        hunyuan_detect = comfy.text_encoders.hunyuan_video.llama_detect(state_dict, "{}qwen3_2b.transformer.".format(pref))
        return supported_models_base.ClipTarget(comfy.text_encoders.ace15.ACE15Tokenizer, comfy.text_encoders.ace15.te(**hunyuan_detect))
models = [LotusD, Stable_Zero123, SD15_instructpix2pix, SD15, SD20, SD21UnclipL, SD21UnclipH, SDXL_instructpix2pix, SDXLRefiner, SDXL, SSD1B, KOALA_700M, KOALA_1B, Segmind_Vega, SD_X4Upscaler, Stable_Cascade_C, Stable_Cascade_B, SV3D_u, SV3D_p, SD3, StableAudio, AuraFlow, PixArtAlpha, PixArtSigma, HunyuanDiT, HunyuanDiT1, FluxInpaint, Flux, FluxSchnell, GenmoMochi, LTXV, LTXAV, HunyuanVideo15_SR_Distilled, HunyuanVideo15, HunyuanImage21Refiner, HunyuanImage21, HunyuanVideoSkyreelsI2V, HunyuanVideoI2V, HunyuanVideo, CosmosT2V, CosmosI2V, CosmosT2IPredict2, CosmosI2VPredict2, ZImage, Lumina2, WAN22_T2V, WAN21_T2V, WAN21_I2V, WAN21_FunControl2V, WAN21_Vace, WAN21_Camera, WAN22_Camera, WAN22_S2V, WAN21_HuMo, WAN22_Animate, Hunyuan3Dv2mini, Hunyuan3Dv2, Hunyuan3Dv2_1, HiDream, Chroma, ChromaRadiance, ACEStep, ACEStep15, Omnigen2, QwenImage, Flux2, Kandinsky5Image, Kandinsky5, Anima]
models += [SVD_img2vid]


@ -0,0 +1,223 @@
from .anima import Qwen3Tokenizer
import comfy.text_encoders.llama
from comfy import sd1_clip
import torch
import math
import comfy.utils
def sample_manual_loop_no_classes(
    model,
    ids=None,
    paddings=[],
    execution_dtype=None,
    cfg_scale: float = 2.0,
    temperature: float = 0.85,
    top_p: float = 0.9,
    top_k: int = None,
    seed: int = 1,
    min_tokens: int = 1,
    max_new_tokens: int = 2048,
    audio_start_id: int = 151669,  # The cutoff ID for audio codes
    eos_token_id: int = 151645,
):
    device = model.execution_device
    if execution_dtype is None:
        if comfy.model_management.should_use_bf16(device):
            execution_dtype = torch.bfloat16
        else:
            execution_dtype = torch.float32

    embeds, attention_mask, num_tokens, embeds_info = model.process_tokens(ids, device)
    for i, t in enumerate(paddings):
        attention_mask[i, :t] = 0
        attention_mask[i, t:] = 1

    output_audio_codes = []
    past_key_values = []
    generator = torch.Generator(device=device)
    generator.manual_seed(seed)

    model_config = model.transformer.model.config
    for x in range(model_config.num_hidden_layers):
        past_key_values.append((torch.empty([embeds.shape[0], model_config.num_key_value_heads, embeds.shape[1] + min_tokens, model_config.head_dim], device=device, dtype=execution_dtype), torch.empty([embeds.shape[0], model_config.num_key_value_heads, embeds.shape[1] + min_tokens, model_config.head_dim], device=device, dtype=execution_dtype), 0))

    progress_bar = comfy.utils.ProgressBar(max_new_tokens)
    for step in range(max_new_tokens):
        outputs = model.transformer(None, attention_mask, embeds=embeds.to(execution_dtype), num_tokens=num_tokens, intermediate_output=None, dtype=execution_dtype, embeds_info=embeds_info, past_key_values=past_key_values)
        next_token_logits = model.transformer.logits(outputs[0])[:, -1]
        past_key_values = outputs[2]

        cond_logits = next_token_logits[0:1]
        uncond_logits = next_token_logits[1:2]
        cfg_logits = uncond_logits + cfg_scale * (cond_logits - uncond_logits)

        if eos_token_id is not None and eos_token_id < audio_start_id and min_tokens < step:
            eos_score = cfg_logits[:, eos_token_id].clone()

        remove_logit_value = torch.finfo(cfg_logits.dtype).min
        # Only generate audio tokens
        cfg_logits[:, :audio_start_id] = remove_logit_value
        if eos_token_id is not None and eos_token_id < audio_start_id and min_tokens < step:
            cfg_logits[:, eos_token_id] = eos_score

        if top_k is not None and top_k > 0:
            top_k_vals, _ = torch.topk(cfg_logits, top_k)
            min_val = top_k_vals[..., -1, None]
            cfg_logits[cfg_logits < min_val] = remove_logit_value

        if top_p is not None and top_p < 1.0:
            sorted_logits, sorted_indices = torch.sort(cfg_logits, descending=True)
            cumulative_probs = torch.cumsum(torch.softmax(sorted_logits, dim=-1), dim=-1)
            sorted_indices_to_remove = cumulative_probs > top_p
            sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone()
            sorted_indices_to_remove[..., 0] = 0
            indices_to_remove = sorted_indices_to_remove.scatter(1, sorted_indices, sorted_indices_to_remove)
            cfg_logits[indices_to_remove] = remove_logit_value

        if temperature > 0:
            cfg_logits = cfg_logits / temperature
            next_token = torch.multinomial(torch.softmax(cfg_logits, dim=-1), num_samples=1, generator=generator).squeeze(1)
        else:
            next_token = torch.argmax(cfg_logits, dim=-1)

        token = next_token.item()
        if token == eos_token_id:
            break

        embed, _, _, _ = model.process_tokens([[token]], device)
        embeds = embed.repeat(2, 1, 1)
        attention_mask = torch.cat([attention_mask, torch.ones((2, 1), device=device, dtype=attention_mask.dtype)], dim=1)
        output_audio_codes.append(token - audio_start_id)
        progress_bar.update_absolute(step)

    return output_audio_codes
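The guidance mix in the loop above is plain classifier-free guidance applied to logits. On toy values (not part of the diff):

import torch
cond = torch.tensor([[2.0, 0.5, -1.0]])
uncond = torch.tensor([[1.0, 0.4, -0.5]])
cfg = uncond + 2.0 * (cond - uncond)  # tensor([[ 3.0000,  0.6000, -1.5000]])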
def generate_audio_codes(model, positive, negative, min_tokens=1, max_tokens=1024, seed=0):
    cfg_scale = 2.0
    positive = [[token for token, _ in inner_list] for inner_list in positive]
    negative = [[token for token, _ in inner_list] for inner_list in negative]
    positive = positive[0]
    negative = negative[0]

    neg_pad = 0
    if len(negative) < len(positive):
        neg_pad = (len(positive) - len(negative))
        negative = [model.special_tokens["pad"]] * neg_pad + negative

    pos_pad = 0
    if len(negative) > len(positive):
        pos_pad = (len(negative) - len(positive))
        positive = [model.special_tokens["pad"]] * pos_pad + positive

    paddings = [pos_pad, neg_pad]
    return sample_manual_loop_no_classes(model, [positive, negative], paddings, cfg_scale=cfg_scale, seed=seed, min_tokens=min_tokens, max_new_tokens=max_tokens)
class ACE15Tokenizer(sd1_clip.SD1Tokenizer):
    def __init__(self, embedding_directory=None, tokenizer_data={}):
        super().__init__(embedding_directory=embedding_directory, tokenizer_data=tokenizer_data, name="qwen3_06b", tokenizer=Qwen3Tokenizer)

    def tokenize_with_weights(self, text, return_word_ids=False, **kwargs):
        out = {}
        lyrics = kwargs.get("lyrics", "")
        bpm = kwargs.get("bpm", 120)
        duration = kwargs.get("duration", 120)
        keyscale = kwargs.get("keyscale", "C major")
        timesignature = kwargs.get("timesignature", 2)
        language = kwargs.get("language", "en")
        seed = kwargs.get("seed", 0)

        duration = math.ceil(duration)
        meta_lm = 'bpm: {}\nduration: {}\nkeyscale: {}\ntimesignature: {}'.format(bpm, duration, keyscale, timesignature)
        lm_template = "<|im_start|>system\n# Instruction\nGenerate audio semantic tokens based on the given conditions:\n\n<|im_end|>\n<|im_start|>user\n# Caption\n{}\n{}\n<|im_end|>\n<|im_start|>assistant\n<think>\n{}\n</think>\n\n<|im_end|>\n"
        meta_cap = '- bpm: {}\n- timesignature: {}\n- keyscale: {}\n- duration: {}\n'.format(bpm, timesignature, keyscale, duration)

        out["lm_prompt"] = self.qwen3_06b.tokenize_with_weights(lm_template.format(text, lyrics, meta_lm), disable_weights=True)
        out["lm_prompt_negative"] = self.qwen3_06b.tokenize_with_weights(lm_template.format(text, lyrics, ""), disable_weights=True)
        out["lyrics"] = self.qwen3_06b.tokenize_with_weights("# Languages\n{}\n\n# Lyric{}<|endoftext|><|endoftext|>".format(language, lyrics), return_word_ids, disable_weights=True, **kwargs)
        out["qwen3_06b"] = self.qwen3_06b.tokenize_with_weights("# Instruction\nGenerate audio semantic tokens based on the given conditions:\n\n# Caption\n{}# Metas\n{}<|endoftext|>\n<|endoftext|>".format(text, meta_cap), return_word_ids, **kwargs)
        out["lm_metadata"] = {"min_tokens": duration * 5, "seed": seed}
        return out
class Qwen3_06BModel(sd1_clip.SDClipModel):
    def __init__(self, device="cpu", layer="last", layer_idx=None, dtype=None, attention_mask=True, model_options={}):
        super().__init__(device=device, layer=layer, layer_idx=layer_idx, textmodel_json_config={}, dtype=dtype, special_tokens={"pad": 151643}, layer_norm_hidden_state=False, model_class=comfy.text_encoders.llama.Qwen3_06B_ACE15, enable_attention_masks=attention_mask, return_attention_masks=attention_mask, model_options=model_options)

class Qwen3_2B_ACE15(sd1_clip.SDClipModel):
    def __init__(self, device="cpu", layer="last", layer_idx=None, dtype=None, attention_mask=True, model_options={}):
        llama_quantization_metadata = model_options.get("llama_quantization_metadata", None)
        if llama_quantization_metadata is not None:
            model_options = model_options.copy()
            model_options["quantization_metadata"] = llama_quantization_metadata
        super().__init__(device=device, layer=layer, layer_idx=layer_idx, textmodel_json_config={}, dtype=dtype, special_tokens={"pad": 151643}, layer_norm_hidden_state=False, model_class=comfy.text_encoders.llama.Qwen3_2B_ACE15_lm, enable_attention_masks=attention_mask, return_attention_masks=attention_mask, model_options=model_options)

class ACE15TEModel(torch.nn.Module):
    def __init__(self, device="cpu", dtype=None, dtype_llama=None, model_options={}):
        super().__init__()
        if dtype_llama is None:
            dtype_llama = dtype
        self.qwen3_06b = Qwen3_06BModel(device=device, dtype=dtype, model_options=model_options)
        self.qwen3_2b = Qwen3_2B_ACE15(device=device, dtype=dtype_llama, model_options=model_options)
        self.dtypes = set([dtype, dtype_llama])

    def encode_token_weights(self, token_weight_pairs):
        token_weight_pairs_base = token_weight_pairs["qwen3_06b"]
        token_weight_pairs_lyrics = token_weight_pairs["lyrics"]

        self.qwen3_06b.set_clip_options({"layer": None})
        base_out, _, extra = self.qwen3_06b.encode_token_weights(token_weight_pairs_base)
        self.qwen3_06b.set_clip_options({"layer": [0]})
        lyrics_embeds, _, extra_l = self.qwen3_06b.encode_token_weights(token_weight_pairs_lyrics)

        lm_metadata = token_weight_pairs["lm_metadata"]
        audio_codes = generate_audio_codes(self.qwen3_2b, token_weight_pairs["lm_prompt"], token_weight_pairs["lm_prompt_negative"], min_tokens=lm_metadata["min_tokens"], max_tokens=lm_metadata["min_tokens"], seed=lm_metadata["seed"])
        return base_out, None, {"conditioning_lyrics": lyrics_embeds[:, 0], "audio_codes": [audio_codes]}

    def set_clip_options(self, options):
        self.qwen3_06b.set_clip_options(options)
        self.qwen3_2b.set_clip_options(options)

    def reset_clip_options(self):
        self.qwen3_06b.reset_clip_options()
        self.qwen3_2b.reset_clip_options()

    def load_sd(self, sd):
        if "model.layers.0.post_attention_layernorm.weight" in sd:
            shape = sd["model.layers.0.post_attention_layernorm.weight"].shape
            if shape[0] == 1024:
                return self.qwen3_06b.load_sd(sd)
            else:
                return self.qwen3_2b.load_sd(sd)

    def memory_estimation_function(self, token_weight_pairs, device=None):
        lm_metadata = token_weight_pairs["lm_metadata"]
        constant = 0.4375
        if comfy.model_management.should_use_bf16(device):
            constant *= 0.5
        token_weight_pairs = token_weight_pairs.get("lm_prompt", [])
        num_tokens = sum(map(lambda a: len(a), token_weight_pairs))
        num_tokens += lm_metadata['min_tokens']
        return num_tokens * constant * 1024 * 1024

def te(dtype_llama=None, llama_quantization_metadata=None):
    class ACE15TEModel_(ACE15TEModel):
        def __init__(self, device="cpu", dtype=None, model_options={}):
            if llama_quantization_metadata is not None:
                model_options = model_options.copy()
                model_options["llama_quantization_metadata"] = llama_quantization_metadata
            super().__init__(device=device, dtype_llama=dtype_llama, dtype=dtype, model_options=model_options)
    return ACE15TEModel_


@ -6,6 +6,7 @@ import math
from comfy.ldm.modules.attention import optimized_attention_for_device
import comfy.model_management
import comfy.ops
import comfy.ldm.common_dit
import comfy.clip_model
@ -103,6 +104,52 @@ class Qwen3_06BConfig:
    final_norm: bool = True
    lm_head: bool = False

@dataclass
class Qwen3_06B_ACE15_Config:
    vocab_size: int = 151669
    hidden_size: int = 1024
    intermediate_size: int = 3072
    num_hidden_layers: int = 28
    num_attention_heads: int = 16
    num_key_value_heads: int = 8
    max_position_embeddings: int = 32768
    rms_norm_eps: float = 1e-6
    rope_theta: float = 1000000.0
    transformer_type: str = "llama"
    head_dim = 128
    rms_norm_add = False
    mlp_activation = "silu"
    qkv_bias = False
    rope_dims = None
    q_norm = "gemma3"
    k_norm = "gemma3"
    rope_scale = None
    final_norm: bool = True
    lm_head: bool = False

@dataclass
class Qwen3_2B_ACE15_lm_Config:
    vocab_size: int = 217204
    hidden_size: int = 2048
    intermediate_size: int = 6144
    num_hidden_layers: int = 28
    num_attention_heads: int = 16
    num_key_value_heads: int = 8
    max_position_embeddings: int = 40960
    rms_norm_eps: float = 1e-6
    rope_theta: float = 1000000.0
    transformer_type: str = "llama"
    head_dim = 128
    rms_norm_add = False
    mlp_activation = "silu"
    qkv_bias = False
    rope_dims = None
    q_norm = "gemma3"
    k_norm = "gemma3"
    rope_scale = None
    final_norm: bool = True
    lm_head: bool = False
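# Note: @dataclass only turns annotated names into generated __init__
# parameters; the unannotated fields above (head_dim, rms_norm_add,
# mlp_activation, qkv_bias, rope_dims, q_norm, k_norm, rope_scale) stay
# plain class attributes shared by all instances.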
@dataclass
class Qwen3_4BConfig:
    vocab_size: int = 151936
@ -581,10 +628,10 @@ class Llama2_(nn.Module):
        mask = None
        if attention_mask is not None:
            mask = 1.0 - attention_mask.to(x.dtype).reshape((attention_mask.shape[0], 1, -1, attention_mask.shape[-1])).expand(attention_mask.shape[0], 1, seq_len, attention_mask.shape[-1])
            mask = mask.masked_fill(mask.to(torch.bool), float("-inf"))
            mask = mask.masked_fill(mask.to(torch.bool), torch.finfo(x.dtype).min)

        if seq_len > 1:
            causal_mask = torch.empty(past_len + seq_len, past_len + seq_len, dtype=x.dtype, device=x.device).fill_(float("-inf")).triu_(1)
            causal_mask = torch.empty(past_len + seq_len, past_len + seq_len, dtype=x.dtype, device=x.device).fill_(torch.finfo(x.dtype).min).triu_(1)
            if mask is not None:
                mask += causal_mask
            else:
@ -729,6 +776,39 @@ class Qwen3_06B(BaseLlama, torch.nn.Module):
        self.model = Llama2_(config, device=device, dtype=dtype, ops=operations)
        self.dtype = dtype

class Qwen3_06B_ACE15(BaseLlama, torch.nn.Module):
    def __init__(self, config_dict, dtype, device, operations):
        super().__init__()
        config = Qwen3_06B_ACE15_Config(**config_dict)
        self.num_layers = config.num_hidden_layers
        self.model = Llama2_(config, device=device, dtype=dtype, ops=operations)
        self.dtype = dtype

class Qwen3_2B_ACE15_lm(BaseLlama, torch.nn.Module):
    def __init__(self, config_dict, dtype, device, operations):
        super().__init__()
        config = Qwen3_2B_ACE15_lm_Config(**config_dict)
        self.num_layers = config.num_hidden_layers
        self.model = Llama2_(config, device=device, dtype=dtype, ops=operations)
        self.dtype = dtype

    def logits(self, x):
        # Project the last hidden state through the tied embedding matrix,
        # going through the comfy ops caster when the layer is a comfy weight.
        input = x[:, -1:]
        module = self.model.embed_tokens
        offload_stream = None
        if module.comfy_cast_weights:
            weight, _, offload_stream = comfy.ops.cast_bias_weight(module, input, offloadable=True)
        else:
            weight = self.model.embed_tokens.weight.to(x)
        x = torch.nn.functional.linear(input, weight, None)
        comfy.ops.uncast_bias_weight(module, weight, None, offload_stream)
        return x
class Qwen3_4B(BaseLlama, torch.nn.Module):
def __init__(self, config_dict, dtype, device, operations):
super().__init__()


@ -7,7 +7,7 @@ from comfy_api.internal.singleton import ProxiedSingleton
from comfy_api.internal.async_to_sync import create_sync_class
from ._input import ImageInput, AudioInput, MaskInput, LatentInput, VideoInput
from ._input_impl import VideoFromFile, VideoFromComponents
from ._util import VideoCodec, VideoContainer, VideoComponents, MESH, VOXEL
from ._util import VideoCodec, VideoContainer, VideoComponents, MESH, VOXEL, File3D
from . import _io_public as io
from . import _ui_public as ui
from comfy_execution.utils import get_executing_context
@ -105,6 +105,7 @@ class Types:
    VideoComponents = VideoComponents
    MESH = MESH
    VOXEL = VOXEL
    File3D = File3D

ComfyAPI = ComfyAPI_latest


@ -27,7 +27,7 @@ if TYPE_CHECKING:
from comfy_api.internal import (_ComfyNodeInternal, _NodeOutputInternal, classproperty, copy_class, first_real_override, is_class,
prune_dict, shallow_clone_class)
from comfy_execution.graph_utils import ExecutionBlocker
from ._util import MESH, VOXEL, SVG as _SVG
from ._util import MESH, VOXEL, SVG as _SVG, File3D
class FolderType(str, Enum):
@ -667,6 +667,49 @@ class Voxel(ComfyTypeIO):
class Mesh(ComfyTypeIO):
    Type = MESH

@comfytype(io_type="FILE_3D")
class File3DAny(ComfyTypeIO):
    """General 3D file type - accepts any supported 3D format."""
    Type = File3D

@comfytype(io_type="FILE_3D_GLB")
class File3DGLB(ComfyTypeIO):
    """GLB format 3D file - binary glTF, best for web and cross-platform."""
    Type = File3D

@comfytype(io_type="FILE_3D_GLTF")
class File3DGLTF(ComfyTypeIO):
    """GLTF format 3D file - JSON-based glTF with external resources."""
    Type = File3D

@comfytype(io_type="FILE_3D_FBX")
class File3DFBX(ComfyTypeIO):
    """FBX format 3D file - best for game engines and animation."""
    Type = File3D

@comfytype(io_type="FILE_3D_OBJ")
class File3DOBJ(ComfyTypeIO):
    """OBJ format 3D file - simple geometry format."""
    Type = File3D

@comfytype(io_type="FILE_3D_STL")
class File3DSTL(ComfyTypeIO):
    """STL format 3D file - best for 3D printing."""
    Type = File3D

@comfytype(io_type="FILE_3D_USDZ")
class File3DUSDZ(ComfyTypeIO):
    """USDZ format 3D file - Apple AR format."""
    Type = File3D

@comfytype(io_type="HOOKS")
class Hooks(ComfyTypeIO):
    if TYPE_CHECKING:
@ -2037,6 +2080,13 @@ __all__ = [
"LossMap",
"Voxel",
"Mesh",
"File3DAny",
"File3DGLB",
"File3DGLTF",
"File3DFBX",
"File3DOBJ",
"File3DSTL",
"File3DUSDZ",
"Hooks",
"HookKeyframes",
"TimestepsRange",


@ -1,5 +1,5 @@
from .video_types import VideoContainer, VideoCodec, VideoComponents
from .geometry_types import VOXEL, MESH
from .geometry_types import VOXEL, MESH, File3D
from .image_types import SVG
__all__ = [
@ -9,5 +9,6 @@ __all__ = [
"VideoComponents",
"VOXEL",
"MESH",
"File3D",
"SVG",
]


@ -1,3 +1,8 @@
import shutil
from io import BytesIO
from pathlib import Path
from typing import IO
import torch
@ -10,3 +15,75 @@ class MESH:
    def __init__(self, vertices: torch.Tensor, faces: torch.Tensor):
        self.vertices = vertices
        self.faces = faces

class File3D:
    """Class representing a 3D file from a file path or binary stream.

    Supports both disk-backed (file path) and memory-backed (BytesIO) storage.
    """

    def __init__(self, source: str | IO[bytes], file_format: str = ""):
        self._source = source
        self._format = file_format or self._infer_format()

    def _infer_format(self) -> str:
        if isinstance(self._source, str):
            return Path(self._source).suffix.lstrip(".").lower()
        return ""

    @property
    def format(self) -> str:
        return self._format

    @format.setter
    def format(self, value: str) -> None:
        self._format = value.lstrip(".").lower() if value else ""

    @property
    def is_disk_backed(self) -> bool:
        return isinstance(self._source, str)

    def get_source(self) -> str | IO[bytes]:
        if isinstance(self._source, str):
            return self._source
        if hasattr(self._source, "seek"):
            self._source.seek(0)
        return self._source

    def get_data(self) -> BytesIO:
        if isinstance(self._source, str):
            with open(self._source, "rb") as f:
                result = BytesIO(f.read())
            return result
        if hasattr(self._source, "seek"):
            self._source.seek(0)
        if isinstance(self._source, BytesIO):
            return self._source
        return BytesIO(self._source.read())

    def save_to(self, path: str) -> str:
        dest = Path(path)
        dest.parent.mkdir(parents=True, exist_ok=True)
        if isinstance(self._source, str):
            if Path(self._source).resolve() != dest.resolve():
                shutil.copy2(self._source, dest)
        else:
            if hasattr(self._source, "seek"):
                self._source.seek(0)
            with open(dest, "wb") as f:
                f.write(self._source.read())
        return str(dest)

    def get_bytes(self) -> bytes:
        if isinstance(self._source, str):
            return Path(self._source).read_bytes()
        if hasattr(self._source, "seek"):
            self._source.seek(0)
        return self._source.read()

    def __repr__(self) -> str:
        if isinstance(self._source, str):
            return f"File3D(source={self._source!r}, format={self._format!r})"
        return f"File3D(<stream>, format={self._format!r})"


@ -0,0 +1,51 @@
from typing import TypedDict
from pydantic import BaseModel, Field
class InputVideoModel(TypedDict):
    model: str
    resolution: str

class ImageEnhanceTaskCreateRequest(BaseModel):
    model_name: str = Field(...)
    img_url: str = Field(...)
    extension: str = Field(".png")
    exif: bool = Field(False)
    DPI: int | None = Field(None)

class VideoEnhanceTaskCreateRequest(BaseModel):
    video_url: str = Field(...)
    extension: str = Field(".mp4")
    model_name: str | None = Field(...)
    resolution: list[int] = Field(..., description="Target resolution [width, height]")
    original_resolution: list[int] = Field(..., description="Original video resolution [width, height]")

class TaskCreateDataResponse(BaseModel):
    job_id: str = Field(...)
    consume_coins: int | None = Field(None)

class TaskStatusPollRequest(BaseModel):
    job_id: str = Field(...)

class TaskCreateResponse(BaseModel):
    code: int = Field(...)
    message: str = Field(...)
    data: TaskCreateDataResponse | None = Field(None)

class TaskStatusDataResponse(BaseModel):
    job_id: str = Field(...)
    status: str = Field(...)
    res_url: str = Field("")

class TaskStatusResponse(BaseModel):
    code: int = Field(...)
    message: str = Field(...)
    data: TaskStatusDataResponse = Field(...)


@ -109,14 +109,19 @@ class MeshyTextureRequest(BaseModel):
class MeshyModelsUrls(BaseModel):
    glb: str = Field("")
    fbx: str = Field("")
    usdz: str = Field("")
    obj: str = Field("")

class MeshyRiggedModelsUrls(BaseModel):
    rigged_character_glb_url: str = Field("")
    rigged_character_fbx_url: str = Field("")

class MeshyAnimatedModelsUrls(BaseModel):
    animation_glb_url: str = Field("")
    animation_fbx_url: str = Field("")
class MeshyResultTextureUrls(BaseModel):


@ -0,0 +1,342 @@
import math
from typing_extensions import override
from comfy_api.latest import IO, ComfyExtension, Input
from comfy_api_nodes.apis.hitpaw import (
ImageEnhanceTaskCreateRequest,
InputVideoModel,
TaskCreateDataResponse,
TaskCreateResponse,
TaskStatusPollRequest,
TaskStatusResponse,
VideoEnhanceTaskCreateRequest,
)
from comfy_api_nodes.util import (
ApiEndpoint,
download_url_to_image_tensor,
download_url_to_video_output,
downscale_image_tensor,
get_image_dimensions,
poll_op,
sync_op,
upload_image_to_comfyapi,
upload_video_to_comfyapi,
validate_video_duration,
)
VIDEO_MODELS_MODELS_MAP = {
    "Portrait Restore Model (1x)": "portrait_restore_1x",
    "Portrait Restore Model (2x)": "portrait_restore_2x",
    "General Restore Model (1x)": "general_restore_1x",
    "General Restore Model (2x)": "general_restore_2x",
    "General Restore Model (4x)": "general_restore_4x",
    "Ultra HD Model (2x)": "ultrahd_restore_2x",
    "Generative Model (1x)": "generative_1x",
}

# Resolution name to target dimension (shorter side) in pixels
RESOLUTION_TARGET_MAP = {
    "720p": 720,
    "1080p": 1080,
    "2K/QHD": 1440,
    "4K/UHD": 2160,
    "8K": 4320,
}

# Square (1:1) resolutions use standard square dimensions
RESOLUTION_SQUARE_MAP = {
    "720p": 720,
    "1080p": 1080,
    "2K/QHD": 1440,
    "4K/UHD": 2048,  # DCI 4K square
    "8K": 4096,  # DCI 8K square
}

# Models with limited resolution support (no 8K)
LIMITED_RESOLUTION_MODELS = {"Generative Model (1x)"}

# Resolution options for different model types
RESOLUTIONS_LIMITED = ["original", "720p", "1080p", "2K/QHD", "4K/UHD"]
RESOLUTIONS_FULL = ["original", "720p", "1080p", "2K/QHD", "4K/UHD", "8K"]

# Maximum output resolution in pixels
MAX_PIXELS_GENERATIVE = 32_000_000
MAX_MP_GENERATIVE = MAX_PIXELS_GENERATIVE // 1_000_000
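# Worked example of the cap: a 1920x1080 input at 4x would produce
# 1920 * 1080 * 4 * 4 = 33,177,600 output pixels, just over
# MAX_PIXELS_GENERATIVE, so it must be auto-downscaled or rejected.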
class HitPawGeneralImageEnhance(IO.ComfyNode):
@classmethod
def define_schema(cls):
return IO.Schema(
node_id="HitPawGeneralImageEnhance",
display_name="HitPaw General Image Enhance",
category="api node/image/HitPaw",
description="Upscale low-resolution images to super-resolution, eliminate artifacts and noise. "
f"Maximum output: {MAX_MP_GENERATIVE} megapixels.",
inputs=[
IO.Combo.Input("model", options=["generative_portrait", "generative"]),
IO.Image.Input("image"),
IO.Combo.Input("upscale_factor", options=[1, 2, 4]),
IO.Boolean.Input(
"auto_downscale",
default=False,
tooltip="Automatically downscale input image if output would exceed the limit.",
),
],
outputs=[
IO.Image.Output(),
],
hidden=[
IO.Hidden.auth_token_comfy_org,
IO.Hidden.api_key_comfy_org,
IO.Hidden.unique_id,
],
is_api_node=True,
price_badge=IO.PriceBadge(
depends_on=IO.PriceBadgeDepends(widgets=["model"]),
expr="""
(
$prices := {
"generative_portrait": {"min": 0.02, "max": 0.06},
"generative": {"min": 0.05, "max": 0.15}
};
$price := $lookup($prices, widgets.model);
{
"type": "range_usd",
"min_usd": $price.min,
"max_usd": $price.max
}
)
""",
),
)
    @classmethod
    async def execute(
        cls,
        model: str,
        image: Input.Image,
        upscale_factor: int,
        auto_downscale: bool,
    ) -> IO.NodeOutput:
        height, width = get_image_dimensions(image)
        requested_scale = upscale_factor
        output_pixels = height * width * requested_scale * requested_scale
        if output_pixels > MAX_PIXELS_GENERATIVE:
            if auto_downscale:
                input_pixels = width * height
                scale = 1
                max_input_pixels = MAX_PIXELS_GENERATIVE
                for candidate in [4, 2, 1]:
                    if candidate > requested_scale:
                        continue
                    scale_output_pixels = input_pixels * candidate * candidate
                    if scale_output_pixels <= MAX_PIXELS_GENERATIVE:
                        scale = candidate
                        max_input_pixels = None
                        break
                    # Check if we can downscale input by at most 2x to fit
                    downscale_ratio = math.sqrt(scale_output_pixels / MAX_PIXELS_GENERATIVE)
                    if downscale_ratio <= 2.0:
                        scale = candidate
                        max_input_pixels = MAX_PIXELS_GENERATIVE // (candidate * candidate)
                        break
                if max_input_pixels is not None:
                    image = downscale_image_tensor(image, total_pixels=max_input_pixels)
                upscale_factor = scale
            else:
                output_width = width * requested_scale
                output_height = height * requested_scale
                raise ValueError(
                    f"Output size ({output_width}x{output_height} = {output_pixels:,} pixels) "
                    f"exceeds maximum allowed size of {MAX_PIXELS_GENERATIVE:,} pixels ({MAX_MP_GENERATIVE}MP). "
                    f"Enable auto_downscale or use a smaller input image or a lower upscale factor."
                )

        initial_res = await sync_op(
            cls,
            ApiEndpoint(path="/proxy/hitpaw/api/photo-enhancer", method="POST"),
            response_model=TaskCreateResponse,
            data=ImageEnhanceTaskCreateRequest(
                model_name=f"{model}_{upscale_factor}x",
                img_url=await upload_image_to_comfyapi(cls, image, total_pixels=None),
            ),
            wait_label="Creating task",
            final_label_on_success="Task created",
        )
        if initial_res.code != 200:
            raise ValueError(f"Task creation failed with code {initial_res.code}: {initial_res.message}")
        request_price = initial_res.data.consume_coins / 1000
        final_response = await poll_op(
            cls,
            ApiEndpoint(path="/proxy/hitpaw/api/task-status", method="POST"),
            data=TaskCreateDataResponse(job_id=initial_res.data.job_id),
            response_model=TaskStatusResponse,
            status_extractor=lambda x: x.data.status,
            price_extractor=lambda x: request_price,
            poll_interval=10.0,
            max_poll_attempts=480,
        )
        return IO.NodeOutput(await download_url_to_image_tensor(final_response.data.res_url))
class HitPawVideoEnhance(IO.ComfyNode):
@classmethod
def define_schema(cls):
model_options = []
for model_name in VIDEO_MODELS_MODELS_MAP:
if model_name in LIMITED_RESOLUTION_MODELS:
resolutions = RESOLUTIONS_LIMITED
else:
resolutions = RESOLUTIONS_FULL
model_options.append(
IO.DynamicCombo.Option(
model_name,
[IO.Combo.Input("resolution", options=resolutions)],
)
)
return IO.Schema(
node_id="HitPawVideoEnhance",
display_name="HitPaw Video Enhance",
category="api node/video/HitPaw",
description="Upscale low-resolution videos to high resolution, eliminate artifacts and noise. "
"Prices shown are per second of video.",
inputs=[
IO.DynamicCombo.Input("model", options=model_options),
IO.Video.Input("video"),
],
outputs=[
IO.Video.Output(),
],
hidden=[
IO.Hidden.auth_token_comfy_org,
IO.Hidden.api_key_comfy_org,
IO.Hidden.unique_id,
],
is_api_node=True,
price_badge=IO.PriceBadge(
depends_on=IO.PriceBadgeDepends(widgets=["model", "model.resolution"]),
expr="""
(
$m := $lookup(widgets, "model");
$res := $lookup(widgets, "model.resolution");
$standard_model_prices := {
"original": {"min": 0.01, "max": 0.198},
"720p": {"min": 0.01, "max": 0.06},
"1080p": {"min": 0.015, "max": 0.09},
"2k/qhd": {"min": 0.02, "max": 0.117},
"4k/uhd": {"min": 0.025, "max": 0.152},
"8k": {"min": 0.033, "max": 0.198}
};
$ultra_hd_model_prices := {
"original": {"min": 0.015, "max": 0.264},
"720p": {"min": 0.015, "max": 0.092},
"1080p": {"min": 0.02, "max": 0.12},
"2k/qhd": {"min": 0.026, "max": 0.156},
"4k/uhd": {"min": 0.034, "max": 0.203},
"8k": {"min": 0.044, "max": 0.264}
};
$generative_model_prices := {
"original": {"min": 0.015, "max": 0.338},
"720p": {"min": 0.008, "max": 0.090},
"1080p": {"min": 0.05, "max": 0.15},
"2k/qhd": {"min": 0.038, "max": 0.225},
"4k/uhd": {"min": 0.056, "max": 0.338}
};
$prices := $contains($m, "ultra hd") ? $ultra_hd_model_prices :
$contains($m, "generative") ? $generative_model_prices :
$standard_model_prices;
$price := $lookup($prices, $res);
{
"type": "range_usd",
"min_usd": $price.min,
"max_usd": $price.max,
"format": {"approximate": true, "suffix": "/second"}
}
)
""",
),
)
    @classmethod
    async def execute(
        cls,
        model: InputVideoModel,
        video: Input.Video,
    ) -> IO.NodeOutput:
        validate_video_duration(video, min_duration=0.5, max_duration=60 * 60)
        resolution = model["resolution"]
        src_width, src_height = video.get_dimensions()
        if resolution == "original":
            output_width = src_width
            output_height = src_height
        else:
            if src_width == src_height:
                target_size = RESOLUTION_SQUARE_MAP[resolution]
                if target_size < src_width:
                    raise ValueError(
                        f"Selected resolution {resolution} ({target_size}x{target_size}) is smaller than "
                        f"the input video ({src_width}x{src_height}). Please select a higher resolution or 'original'."
                    )
                output_width = target_size
                output_height = target_size
            else:
                min_dimension = min(src_width, src_height)
                target_size = RESOLUTION_TARGET_MAP[resolution]
                if target_size < min_dimension:
                    raise ValueError(
                        f"Selected resolution {resolution} ({target_size}p) is smaller than "
                        f"the input video's shorter dimension ({min_dimension}p). "
                        f"Please select a higher resolution or 'original'."
                    )
                if src_width > src_height:
                    output_height = target_size
                    output_width = int(target_size * (src_width / src_height))
                else:
                    output_width = target_size
                    output_height = int(target_size * (src_height / src_width))
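        # Example: a 1280x720 source at "1080p" maps the shorter side to 1080
        # and keeps the aspect ratio: int(1080 * (1280 / 720)) = 1920, so the
        # request is for 1920x1080.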
initial_res = await sync_op(
cls,
ApiEndpoint(path="/proxy/hitpaw/api/video-enhancer", method="POST"),
response_model=TaskCreateResponse,
data=VideoEnhanceTaskCreateRequest(
video_url=await upload_video_to_comfyapi(cls, video),
resolution=[output_width, output_height],
original_resolution=[src_width, src_height],
model_name=VIDEO_MODELS_MODELS_MAP[model["model"]],
),
wait_label="Creating task",
final_label_on_success="Task created",
)
request_price = initial_res.data.consume_coins / 1000
if initial_res.code != 200:
raise ValueError(f"Task creation failed with code {initial_res.code}: {initial_res.message}")
final_response = await poll_op(
cls,
ApiEndpoint(path="/proxy/hitpaw/api/task-status", method="POST"),
data=TaskStatusPollRequest(job_id=initial_res.data.job_id),
response_model=TaskStatusResponse,
status_extractor=lambda x: x.data.status,
price_extractor=lambda x: request_price,
poll_interval=10.0,
max_poll_attempts=320,
)
return IO.NodeOutput(await download_url_to_video_output(final_response.data.res_url))
class HitPawExtension(ComfyExtension):
@override
async def get_node_list(self) -> list[type[IO.ComfyNode]]:
return [
HitPawGeneralImageEnhance,
HitPawVideoEnhance,
]
async def comfy_entrypoint() -> HitPawExtension:
return HitPawExtension()


@ -1,5 +1,3 @@
import os
from typing_extensions import override
from comfy_api.latest import IO, ComfyExtension, Input
@ -14,7 +12,7 @@ from comfy_api_nodes.apis.hunyuan3d import (
)
from comfy_api_nodes.util import (
ApiEndpoint,
download_url_to_bytesio,
download_url_to_file_3d,
downscale_image_tensor_by_max_side,
poll_op,
sync_op,
@ -22,14 +20,13 @@ from comfy_api_nodes.util import (
validate_image_dimensions,
validate_string,
)
from folder_paths import get_output_directory
def get_glb_obj_from_response(response_objs: list[ResultFile3D]) -> ResultFile3D:
def get_file_from_response(response_objs: list[ResultFile3D], file_type: str) -> ResultFile3D | None:
for i in response_objs:
if i.Type.lower() == "glb":
if i.Type.lower() == file_type.lower():
return i
raise ValueError("No GLB file found in response. Please report this to the developers.")
return None
class TencentTextToModelNode(IO.ComfyNode):
@ -74,7 +71,9 @@ class TencentTextToModelNode(IO.ComfyNode):
),
],
outputs=[
IO.String.Output(display_name="model_file"),
IO.String.Output(display_name="model_file"), # for backward compatibility only
IO.File3DGLB.Output(display_name="GLB"),
IO.File3DOBJ.Output(display_name="OBJ"),
],
hidden=[
IO.Hidden.auth_token_comfy_org,
@ -124,19 +123,20 @@ class TencentTextToModelNode(IO.ComfyNode):
)
if response.Error:
raise ValueError(f"Task creation failed with code {response.Error.Code}: {response.Error.Message}")
task_id = response.JobId
result = await poll_op(
cls,
ApiEndpoint(path="/proxy/tencent/hunyuan/3d-pro/query", method="POST"),
data=To3DProTaskQueryRequest(JobId=response.JobId),
data=To3DProTaskQueryRequest(JobId=task_id),
response_model=To3DProTaskResultResponse,
status_extractor=lambda r: r.Status,
)
model_file = f"hunyuan_model_{response.JobId}.glb"
await download_url_to_bytesio(
get_glb_obj_from_response(result.ResultFile3Ds).Url,
os.path.join(get_output_directory(), model_file),
glb_result = get_file_from_response(result.ResultFile3Ds, "glb")
obj_result = get_file_from_response(result.ResultFile3Ds, "obj")
file_glb = await download_url_to_file_3d(glb_result.Url, "glb", task_id=task_id) if glb_result else None
return IO.NodeOutput(
file_glb, file_glb, await download_url_to_file_3d(obj_result.Url, "obj", task_id=task_id) if obj_result else None
)
return IO.NodeOutput(model_file)
class TencentImageToModelNode(IO.ComfyNode):
@ -184,7 +184,9 @@ class TencentImageToModelNode(IO.ComfyNode):
),
],
outputs=[
IO.String.Output(display_name="model_file"),
IO.String.Output(display_name="model_file"), # for backward compatibility only
IO.File3DGLB.Output(display_name="GLB"),
IO.File3DOBJ.Output(display_name="OBJ"),
],
hidden=[
IO.Hidden.auth_token_comfy_org,
@ -269,19 +271,20 @@ class TencentImageToModelNode(IO.ComfyNode):
)
if response.Error:
raise ValueError(f"Task creation failed with code {response.Error.Code}: {response.Error.Message}")
task_id = response.JobId
result = await poll_op(
cls,
ApiEndpoint(path="/proxy/tencent/hunyuan/3d-pro/query", method="POST"),
data=To3DProTaskQueryRequest(JobId=response.JobId),
data=To3DProTaskQueryRequest(JobId=task_id),
response_model=To3DProTaskResultResponse,
status_extractor=lambda r: r.Status,
)
model_file = f"hunyuan_model_{response.JobId}.glb"
await download_url_to_bytesio(
get_glb_obj_from_response(result.ResultFile3Ds).Url,
os.path.join(get_output_directory(), model_file),
glb_result = get_file_from_response(result.ResultFile3Ds, "glb")
obj_result = get_file_from_response(result.ResultFile3Ds, "obj")
file_glb = await download_url_to_file_3d(glb_result.Url, "glb", task_id=task_id) if glb_result else None
return IO.NodeOutput(
file_glb, file_glb, await download_url_to_file_3d(obj_result.Url, "obj", task_id=task_id) if obj_result else None
)
return IO.NodeOutput(model_file)
class TencentHunyuan3DExtension(ComfyExtension):


@ -1,5 +1,3 @@
import os
from typing_extensions import override
from comfy_api.latest import IO, ComfyExtension, Input
@ -20,13 +18,12 @@ from comfy_api_nodes.apis.meshy import (
)
from comfy_api_nodes.util import (
ApiEndpoint,
download_url_to_bytesio,
download_url_to_file_3d,
poll_op,
sync_op,
upload_images_to_comfyapi,
validate_string,
)
from folder_paths import get_output_directory
class MeshyTextToModelNode(IO.ComfyNode):
@ -79,8 +76,10 @@ class MeshyTextToModelNode(IO.ComfyNode):
),
],
outputs=[
IO.String.Output(display_name="model_file"),
IO.String.Output(display_name="model_file"), # for backward compatibility only
IO.Custom("MESHY_TASK_ID").Output(display_name="meshy_task_id"),
IO.File3DGLB.Output(display_name="GLB"),
IO.File3DFBX.Output(display_name="FBX"),
],
hidden=[
IO.Hidden.auth_token_comfy_org,
@ -122,16 +121,20 @@ class MeshyTextToModelNode(IO.ComfyNode):
seed=seed,
),
)
task_id = response.result
result = await poll_op(
cls,
ApiEndpoint(path=f"/proxy/meshy/openapi/v2/text-to-3d/{response.result}"),
ApiEndpoint(path=f"/proxy/meshy/openapi/v2/text-to-3d/{task_id}"),
response_model=MeshyModelResult,
status_extractor=lambda r: r.status,
progress_extractor=lambda r: r.progress,
)
model_file = f"meshy_model_{response.result}.glb"
await download_url_to_bytesio(result.model_urls.glb, os.path.join(get_output_directory(), model_file))
return IO.NodeOutput(model_file, response.result)
return IO.NodeOutput(
f"{task_id}.glb",
task_id,
await download_url_to_file_3d(result.model_urls.glb, "glb", task_id=task_id),
await download_url_to_file_3d(result.model_urls.fbx, "fbx", task_id=task_id),
)
class MeshyRefineNode(IO.ComfyNode):
@ -167,8 +170,10 @@ class MeshyRefineNode(IO.ComfyNode):
),
],
outputs=[
IO.String.Output(display_name="model_file"),
IO.String.Output(display_name="model_file"), # for backward compatibility only
IO.Custom("MESHY_TASK_ID").Output(display_name="meshy_task_id"),
IO.File3DGLB.Output(display_name="GLB"),
IO.File3DFBX.Output(display_name="FBX"),
],
hidden=[
IO.Hidden.auth_token_comfy_org,
@ -210,16 +215,20 @@ class MeshyRefineNode(IO.ComfyNode):
ai_model=model,
),
)
task_id = response.result
result = await poll_op(
cls,
ApiEndpoint(path=f"/proxy/meshy/openapi/v2/text-to-3d/{response.result}"),
ApiEndpoint(path=f"/proxy/meshy/openapi/v2/text-to-3d/{task_id}"),
response_model=MeshyModelResult,
status_extractor=lambda r: r.status,
progress_extractor=lambda r: r.progress,
)
model_file = f"meshy_model_{response.result}.glb"
await download_url_to_bytesio(result.model_urls.glb, os.path.join(get_output_directory(), model_file))
return IO.NodeOutput(model_file, response.result)
return IO.NodeOutput(
f"{task_id}.glb",
task_id,
await download_url_to_file_3d(result.model_urls.glb, "glb", task_id=task_id),
await download_url_to_file_3d(result.model_urls.fbx, "fbx", task_id=task_id),
)
class MeshyImageToModelNode(IO.ComfyNode):
@ -303,8 +312,10 @@ class MeshyImageToModelNode(IO.ComfyNode):
),
],
outputs=[
IO.String.Output(display_name="model_file"),
IO.String.Output(display_name="model_file"), # for backward compatibility only
IO.Custom("MESHY_TASK_ID").Output(display_name="meshy_task_id"),
IO.File3DGLB.Output(display_name="GLB"),
IO.File3DFBX.Output(display_name="FBX"),
],
hidden=[
IO.Hidden.auth_token_comfy_org,
@ -368,16 +379,20 @@ class MeshyImageToModelNode(IO.ComfyNode):
seed=seed,
),
)
task_id = response.result
result = await poll_op(
cls,
ApiEndpoint(path=f"/proxy/meshy/openapi/v1/image-to-3d/{response.result}"),
ApiEndpoint(path=f"/proxy/meshy/openapi/v1/image-to-3d/{task_id}"),
response_model=MeshyModelResult,
status_extractor=lambda r: r.status,
progress_extractor=lambda r: r.progress,
)
model_file = f"meshy_model_{response.result}.glb"
await download_url_to_bytesio(result.model_urls.glb, os.path.join(get_output_directory(), model_file))
return IO.NodeOutput(model_file, response.result)
return IO.NodeOutput(
f"{task_id}.glb",
task_id,
await download_url_to_file_3d(result.model_urls.glb, "glb", task_id=task_id),
await download_url_to_file_3d(result.model_urls.fbx, "fbx", task_id=task_id),
)
class MeshyMultiImageToModelNode(IO.ComfyNode):
@ -464,8 +479,10 @@ class MeshyMultiImageToModelNode(IO.ComfyNode):
),
],
outputs=[
IO.String.Output(display_name="model_file"),
IO.String.Output(display_name="model_file"), # for backward compatibility only
IO.Custom("MESHY_TASK_ID").Output(display_name="meshy_task_id"),
IO.File3DGLB.Output(display_name="GLB"),
IO.File3DFBX.Output(display_name="FBX"),
],
hidden=[
IO.Hidden.auth_token_comfy_org,
@ -531,16 +548,20 @@ class MeshyMultiImageToModelNode(IO.ComfyNode):
seed=seed,
),
)
task_id = response.result
result = await poll_op(
cls,
ApiEndpoint(path=f"/proxy/meshy/openapi/v1/multi-image-to-3d/{response.result}"),
ApiEndpoint(path=f"/proxy/meshy/openapi/v1/multi-image-to-3d/{task_id}"),
response_model=MeshyModelResult,
status_extractor=lambda r: r.status,
progress_extractor=lambda r: r.progress,
)
model_file = f"meshy_model_{response.result}.glb"
await download_url_to_bytesio(result.model_urls.glb, os.path.join(get_output_directory(), model_file))
return IO.NodeOutput(model_file, response.result)
return IO.NodeOutput(
f"{task_id}.glb",
task_id,
await download_url_to_file_3d(result.model_urls.glb, "glb", task_id=task_id),
await download_url_to_file_3d(result.model_urls.fbx, "fbx", task_id=task_id),
)
class MeshyRigModelNode(IO.ComfyNode):
@ -571,8 +592,10 @@ class MeshyRigModelNode(IO.ComfyNode):
),
],
outputs=[
IO.String.Output(display_name="model_file"),
IO.String.Output(display_name="model_file"), # for backward compatibility only
IO.Custom("MESHY_RIGGED_TASK_ID").Output(display_name="rig_task_id"),
IO.File3DGLB.Output(display_name="GLB"),
IO.File3DFBX.Output(display_name="FBX"),
],
hidden=[
IO.Hidden.auth_token_comfy_org,
@ -606,18 +629,20 @@ class MeshyRigModelNode(IO.ComfyNode):
texture_image_url=texture_image_url,
),
)
task_id = response.result
result = await poll_op(
cls,
ApiEndpoint(path=f"/proxy/meshy/openapi/v1/rigging/{response.result}"),
ApiEndpoint(path=f"/proxy/meshy/openapi/v1/rigging/{task_id}"),
response_model=MeshyRiggedResult,
status_extractor=lambda r: r.status,
progress_extractor=lambda r: r.progress,
)
model_file = f"meshy_model_{response.result}.glb"
await download_url_to_bytesio(
result.result.rigged_character_glb_url, os.path.join(get_output_directory(), model_file)
return IO.NodeOutput(
f"{task_id}.glb",
task_id,
await download_url_to_file_3d(result.result.rigged_character_glb_url, "glb", task_id=task_id),
await download_url_to_file_3d(result.result.rigged_character_fbx_url, "fbx", task_id=task_id),
)
return IO.NodeOutput(model_file, response.result)
class MeshyAnimateModelNode(IO.ComfyNode):
@ -640,7 +665,9 @@ class MeshyAnimateModelNode(IO.ComfyNode):
),
],
outputs=[
IO.String.Output(display_name="model_file"),
IO.String.Output(display_name="model_file"), # for backward compatibility only
IO.File3DGLB.Output(display_name="GLB"),
IO.File3DFBX.Output(display_name="FBX"),
],
hidden=[
IO.Hidden.auth_token_comfy_org,
@ -669,16 +696,19 @@ class MeshyAnimateModelNode(IO.ComfyNode):
action_id=action_id,
),
)
task_id = response.result
result = await poll_op(
cls,
ApiEndpoint(path=f"/proxy/meshy/openapi/v1/animations/{response.result}"),
ApiEndpoint(path=f"/proxy/meshy/openapi/v1/animations/{task_id}"),
response_model=MeshyAnimationResult,
status_extractor=lambda r: r.status,
progress_extractor=lambda r: r.progress,
)
model_file = f"meshy_model_{response.result}.glb"
await download_url_to_bytesio(result.result.animation_glb_url, os.path.join(get_output_directory(), model_file))
return IO.NodeOutput(model_file, response.result)
return IO.NodeOutput(
f"{task_id}.glb",
await download_url_to_file_3d(result.result.animation_glb_url, "glb", task_id=task_id),
await download_url_to_file_3d(result.result.animation_fbx_url, "fbx", task_id=task_id),
)
class MeshyTextureNode(IO.ComfyNode):
@ -715,8 +745,10 @@ class MeshyTextureNode(IO.ComfyNode):
),
],
outputs=[
IO.String.Output(display_name="model_file"),
IO.String.Output(display_name="model_file"), # for backward compatibility only
IO.Custom("MODEL_TASK_ID").Output(display_name="meshy_task_id"),
IO.File3DGLB.Output(display_name="GLB"),
IO.File3DFBX.Output(display_name="FBX"),
],
hidden=[
IO.Hidden.auth_token_comfy_org,
@ -760,16 +792,20 @@ class MeshyTextureNode(IO.ComfyNode):
image_style_url=image_style_url,
),
)
task_id = response.result
result = await poll_op(
cls,
ApiEndpoint(path=f"/proxy/meshy/openapi/v1/retexture/{response.result}"),
ApiEndpoint(path=f"/proxy/meshy/openapi/v1/retexture/{task_id}"),
response_model=MeshyModelResult,
status_extractor=lambda r: r.status,
progress_extractor=lambda r: r.progress,
)
model_file = f"meshy_model_{response.result}.glb"
await download_url_to_bytesio(result.model_urls.glb, os.path.join(get_output_directory(), model_file))
return IO.NodeOutput(model_file, response.result)
return IO.NodeOutput(
f"{task_id}.glb",
task_id,
await download_url_to_file_3d(result.model_urls.glb, "glb", task_id=task_id),
await download_url_to_file_3d(result.model_urls.fbx, "fbx", task_id=task_id),
)
class MeshyExtension(ComfyExtension):


@ -10,7 +10,6 @@ import folder_paths as comfy_paths
import os
import logging
import math
from typing import Optional
from io import BytesIO
from typing_extensions import override
from PIL import Image
@ -28,8 +27,9 @@ from comfy_api_nodes.util import (
poll_op,
ApiEndpoint,
download_url_to_bytesio,
download_url_to_file_3d,
)
from comfy_api.latest import ComfyExtension, IO
from comfy_api.latest import ComfyExtension, IO, Types
COMMON_PARAMETERS = [
@ -177,7 +177,7 @@ def check_rodin_status(response: Rodin3DCheckStatusResponse) -> str:
return "DONE"
return "Generating"
def extract_progress(response: Rodin3DCheckStatusResponse) -> Optional[int]:
def extract_progress(response: Rodin3DCheckStatusResponse) -> int | None:
if not response.jobs:
return None
completed_count = sum(1 for job in response.jobs if job.status == JobStatus.Done)
@ -207,17 +207,25 @@ async def get_rodin_download_list(uuid: str, cls: type[IO.ComfyNode]) -> Rodin3D
)
async def download_files(url_list, task_uuid: str):
async def download_files(url_list, task_uuid: str) -> tuple[str | None, Types.File3D | None]:
result_folder_name = f"Rodin3D_{task_uuid}"
save_path = os.path.join(comfy_paths.get_output_directory(), result_folder_name)
os.makedirs(save_path, exist_ok=True)
model_file_path = None
file_3d = None
for i in url_list.list:
file_path = os.path.join(save_path, i.name)
if file_path.endswith(".glb"):
if i.name.lower().endswith(".glb"):
model_file_path = os.path.join(result_folder_name, i.name)
await download_url_to_bytesio(i.url, file_path)
return model_file_path
file_3d = await download_url_to_file_3d(i.url, "glb")
# Save to disk for backward compatibility
with open(file_path, "wb") as f:
f.write(file_3d.get_bytes())
else:
await download_url_to_bytesio(i.url, file_path)
return model_file_path, file_3d
class Rodin3D_Regular(IO.ComfyNode):
@ -234,7 +242,10 @@ class Rodin3D_Regular(IO.ComfyNode):
IO.Image.Input("Images"),
*COMMON_PARAMETERS,
],
outputs=[IO.String.Output(display_name="3D Model Path")],
outputs=[
IO.String.Output(display_name="3D Model Path"), # for backward compatibility only
IO.File3DGLB.Output(display_name="GLB"),
],
hidden=[
IO.Hidden.auth_token_comfy_org,
IO.Hidden.api_key_comfy_org,
@ -271,9 +282,9 @@ class Rodin3D_Regular(IO.ComfyNode):
)
await poll_for_task_status(subscription_key, cls)
download_list = await get_rodin_download_list(task_uuid, cls)
model = await download_files(download_list, task_uuid)
model_path, file_3d = await download_files(download_list, task_uuid)
return IO.NodeOutput(model)
return IO.NodeOutput(model_path, file_3d)
class Rodin3D_Detail(IO.ComfyNode):
@ -290,7 +301,10 @@ class Rodin3D_Detail(IO.ComfyNode):
IO.Image.Input("Images"),
*COMMON_PARAMETERS,
],
outputs=[IO.String.Output(display_name="3D Model Path")],
outputs=[
IO.String.Output(display_name="3D Model Path"), # for backward compatibility only
IO.File3DGLB.Output(display_name="GLB"),
],
hidden=[
IO.Hidden.auth_token_comfy_org,
IO.Hidden.api_key_comfy_org,
@ -327,9 +341,9 @@ class Rodin3D_Detail(IO.ComfyNode):
)
await poll_for_task_status(subscription_key, cls)
download_list = await get_rodin_download_list(task_uuid, cls)
model = await download_files(download_list, task_uuid)
model_path, file_3d = await download_files(download_list, task_uuid)
return IO.NodeOutput(model)
return IO.NodeOutput(model_path, file_3d)
class Rodin3D_Smooth(IO.ComfyNode):
@ -346,7 +360,10 @@ class Rodin3D_Smooth(IO.ComfyNode):
IO.Image.Input("Images"),
*COMMON_PARAMETERS,
],
outputs=[IO.String.Output(display_name="3D Model Path")],
outputs=[
IO.String.Output(display_name="3D Model Path"), # for backward compatibility only
IO.File3DGLB.Output(display_name="GLB"),
],
hidden=[
IO.Hidden.auth_token_comfy_org,
IO.Hidden.api_key_comfy_org,
@ -382,9 +399,9 @@ class Rodin3D_Smooth(IO.ComfyNode):
)
await poll_for_task_status(subscription_key, cls)
download_list = await get_rodin_download_list(task_uuid, cls)
model = await download_files(download_list, task_uuid)
model_path, file_3d = await download_files(download_list, task_uuid)
return IO.NodeOutput(model)
return IO.NodeOutput(model_path, file_3d)
class Rodin3D_Sketch(IO.ComfyNode):
@ -408,7 +425,10 @@ class Rodin3D_Sketch(IO.ComfyNode):
optional=True,
),
],
outputs=[IO.String.Output(display_name="3D Model Path")],
outputs=[
IO.String.Output(display_name="3D Model Path"), # for backward compatibility only
IO.File3DGLB.Output(display_name="GLB"),
],
hidden=[
IO.Hidden.auth_token_comfy_org,
IO.Hidden.api_key_comfy_org,
@ -441,9 +461,9 @@ class Rodin3D_Sketch(IO.ComfyNode):
)
await poll_for_task_status(subscription_key, cls)
download_list = await get_rodin_download_list(task_uuid, cls)
model = await download_files(download_list, task_uuid)
model_path, file_3d = await download_files(download_list, task_uuid)
return IO.NodeOutput(model)
return IO.NodeOutput(model_path, file_3d)
class Rodin3D_Gen2(IO.ComfyNode):
@ -475,7 +495,10 @@ class Rodin3D_Gen2(IO.ComfyNode):
),
IO.Boolean.Input("TAPose", default=False),
],
outputs=[IO.String.Output(display_name="3D Model Path")],
outputs=[
IO.String.Output(display_name="3D Model Path"), # for backward compatibility only
IO.File3DGLB.Output(display_name="GLB"),
],
hidden=[
IO.Hidden.auth_token_comfy_org,
IO.Hidden.api_key_comfy_org,
@ -511,9 +534,9 @@ class Rodin3D_Gen2(IO.ComfyNode):
)
await poll_for_task_status(subscription_key, cls)
download_list = await get_rodin_download_list(task_uuid, cls)
model = await download_files(download_list, task_uuid)
model_path, file_3d = await download_files(download_list, task_uuid)
return IO.NodeOutput(model)
return IO.NodeOutput(model_path, file_3d)
class Rodin3DExtension(ComfyExtension):


@ -1,10 +1,6 @@
import os
from typing import Optional
import torch
from typing_extensions import override
from comfy_api.latest import IO, ComfyExtension
from comfy_api.latest import IO, ComfyExtension, Input
from comfy_api_nodes.apis.tripo import (
TripoAnimateRetargetRequest,
TripoAnimateRigRequest,
@ -26,12 +22,11 @@ from comfy_api_nodes.apis.tripo import (
)
from comfy_api_nodes.util import (
ApiEndpoint,
download_url_as_bytesio,
download_url_to_file_3d,
poll_op,
sync_op,
upload_images_to_comfyapi,
)
from folder_paths import get_output_directory
def get_model_url_from_response(response: TripoTaskResponse) -> str:
@ -45,7 +40,7 @@ def get_model_url_from_response(response: TripoTaskResponse) -> str:
async def poll_until_finished(
node_cls: type[IO.ComfyNode],
response: TripoTaskResponse,
average_duration: Optional[int] = None,
average_duration: int | None = None,
) -> IO.NodeOutput:
"""Polls the Tripo API endpoint until the task reaches a terminal state, then returns the response."""
if response.code != 0:
@ -69,12 +64,8 @@ async def poll_until_finished(
)
if response_poll.data.status == TripoTaskStatus.SUCCESS:
url = get_model_url_from_response(response_poll)
bytesio = await download_url_as_bytesio(url)
# Save the downloaded model file
model_file = f"tripo_model_{task_id}.glb"
with open(os.path.join(get_output_directory(), model_file), "wb") as f:
f.write(bytesio.getvalue())
return IO.NodeOutput(model_file, task_id)
file_glb = await download_url_to_file_3d(url, "glb", task_id=task_id)
return IO.NodeOutput(f"{task_id}.glb", task_id, file_glb)
raise RuntimeError(f"Failed to generate mesh: {response_poll}")
@ -107,8 +98,9 @@ class TripoTextToModelNode(IO.ComfyNode):
IO.Combo.Input("geometry_quality", default="standard", options=["standard", "detailed"], optional=True),
],
outputs=[
IO.String.Output(display_name="model_file"),
IO.String.Output(display_name="model_file"), # for backward compatibility only
IO.Custom("MODEL_TASK_ID").Output(display_name="model task_id"),
IO.File3DGLB.Output(display_name="GLB"),
],
hidden=[
IO.Hidden.auth_token_comfy_org,
@ -155,18 +147,18 @@ class TripoTextToModelNode(IO.ComfyNode):
async def execute(
cls,
prompt: str,
negative_prompt: Optional[str] = None,
negative_prompt: str | None = None,
model_version=None,
style: Optional[str] = None,
texture: Optional[bool] = None,
pbr: Optional[bool] = None,
image_seed: Optional[int] = None,
model_seed: Optional[int] = None,
texture_seed: Optional[int] = None,
texture_quality: Optional[str] = None,
geometry_quality: Optional[str] = None,
face_limit: Optional[int] = None,
quad: Optional[bool] = None,
style: str | None = None,
texture: bool | None = None,
pbr: bool | None = None,
image_seed: int | None = None,
model_seed: int | None = None,
texture_seed: int | None = None,
texture_quality: str | None = None,
geometry_quality: str | None = None,
face_limit: int | None = None,
quad: bool | None = None,
) -> IO.NodeOutput:
style_enum = None if style == "None" else style
if not prompt:
@ -232,8 +224,9 @@ class TripoImageToModelNode(IO.ComfyNode):
IO.Combo.Input("geometry_quality", default="standard", options=["standard", "detailed"], optional=True),
],
outputs=[
IO.String.Output(display_name="model_file"),
IO.String.Output(display_name="model_file"), # for backward compatibility only
IO.Custom("MODEL_TASK_ID").Output(display_name="model task_id"),
IO.File3DGLB.Output(display_name="GLB"),
],
hidden=[
IO.Hidden.auth_token_comfy_org,
@ -279,19 +272,19 @@ class TripoImageToModelNode(IO.ComfyNode):
@classmethod
async def execute(
cls,
image: torch.Tensor,
model_version: Optional[str] = None,
style: Optional[str] = None,
texture: Optional[bool] = None,
pbr: Optional[bool] = None,
model_seed: Optional[int] = None,
image: Input.Image,
model_version: str | None = None,
style: str | None = None,
texture: bool | None = None,
pbr: bool | None = None,
model_seed: int | None = None,
orientation=None,
texture_seed: Optional[int] = None,
texture_quality: Optional[str] = None,
geometry_quality: Optional[str] = None,
texture_alignment: Optional[str] = None,
face_limit: Optional[int] = None,
quad: Optional[bool] = None,
texture_seed: int | None = None,
texture_quality: str | None = None,
geometry_quality: str | None = None,
texture_alignment: str | None = None,
face_limit: int | None = None,
quad: bool | None = None,
) -> IO.NodeOutput:
style_enum = None if style == "None" else style
if image is None:
@ -368,8 +361,9 @@ class TripoMultiviewToModelNode(IO.ComfyNode):
IO.Combo.Input("geometry_quality", default="standard", options=["standard", "detailed"], optional=True),
],
outputs=[
IO.String.Output(display_name="model_file"),
IO.String.Output(display_name="model_file"), # for backward compatibility only
IO.Custom("MODEL_TASK_ID").Output(display_name="model task_id"),
IO.File3DGLB.Output(display_name="GLB"),
],
hidden=[
IO.Hidden.auth_token_comfy_org,
@ -411,21 +405,21 @@ class TripoMultiviewToModelNode(IO.ComfyNode):
@classmethod
async def execute(
cls,
image: torch.Tensor,
image_left: Optional[torch.Tensor] = None,
image_back: Optional[torch.Tensor] = None,
image_right: Optional[torch.Tensor] = None,
model_version: Optional[str] = None,
orientation: Optional[str] = None,
texture: Optional[bool] = None,
pbr: Optional[bool] = None,
model_seed: Optional[int] = None,
texture_seed: Optional[int] = None,
texture_quality: Optional[str] = None,
geometry_quality: Optional[str] = None,
texture_alignment: Optional[str] = None,
face_limit: Optional[int] = None,
quad: Optional[bool] = None,
image: Input.Image,
image_left: Input.Image | None = None,
image_back: Input.Image | None = None,
image_right: Input.Image | None = None,
model_version: str | None = None,
orientation: str | None = None,
texture: bool | None = None,
pbr: bool | None = None,
model_seed: int | None = None,
texture_seed: int | None = None,
texture_quality: str | None = None,
geometry_quality: str | None = None,
texture_alignment: str | None = None,
face_limit: int | None = None,
quad: bool | None = None,
) -> IO.NodeOutput:
if image is None:
raise RuntimeError("front image for multiview is required")
@ -487,8 +481,9 @@ class TripoTextureNode(IO.ComfyNode):
),
],
outputs=[
IO.String.Output(display_name="model_file"),
IO.String.Output(display_name="model_file"), # for backward compatibility only
IO.Custom("MODEL_TASK_ID").Output(display_name="model task_id"),
IO.File3DGLB.Output(display_name="GLB"),
],
hidden=[
IO.Hidden.auth_token_comfy_org,
@ -512,11 +507,11 @@ class TripoTextureNode(IO.ComfyNode):
async def execute(
cls,
model_task_id,
texture: Optional[bool] = None,
pbr: Optional[bool] = None,
texture_seed: Optional[int] = None,
texture_quality: Optional[str] = None,
texture_alignment: Optional[str] = None,
texture: bool | None = None,
pbr: bool | None = None,
texture_seed: int | None = None,
texture_quality: str | None = None,
texture_alignment: str | None = None,
) -> IO.NodeOutput:
response = await sync_op(
cls,
@ -547,8 +542,9 @@ class TripoRefineNode(IO.ComfyNode):
IO.Custom("MODEL_TASK_ID").Input("model_task_id", tooltip="Must be a v1.4 Tripo model"),
],
outputs=[
IO.String.Output(display_name="model_file"),
IO.String.Output(display_name="model_file"), # for backward compatibility only
IO.Custom("MODEL_TASK_ID").Output(display_name="model task_id"),
IO.File3DGLB.Output(display_name="GLB"),
],
hidden=[
IO.Hidden.auth_token_comfy_org,
@ -583,8 +579,9 @@ class TripoRigNode(IO.ComfyNode):
category="api node/3d/Tripo",
inputs=[IO.Custom("MODEL_TASK_ID").Input("original_model_task_id")],
outputs=[
IO.String.Output(display_name="model_file"),
IO.String.Output(display_name="model_file"), # for backward compatibility only
IO.Custom("RIG_TASK_ID").Output(display_name="rig task_id"),
IO.File3DGLB.Output(display_name="GLB"),
],
hidden=[
IO.Hidden.auth_token_comfy_org,
@ -642,8 +639,9 @@ class TripoRetargetNode(IO.ComfyNode):
),
],
outputs=[
IO.String.Output(display_name="model_file"),
IO.String.Output(display_name="model_file"), # for backward compatibility only
IO.Custom("RETARGET_TASK_ID").Output(display_name="retarget task_id"),
IO.File3DGLB.Output(display_name="GLB"),
],
hidden=[
IO.Hidden.auth_token_comfy_org,


@ -28,6 +28,7 @@ from .conversions import (
from .download_helpers import (
download_url_as_bytesio,
download_url_to_bytesio,
download_url_to_file_3d,
download_url_to_image_tensor,
download_url_to_video_output,
)
@ -69,6 +70,7 @@ __all__ = [
# Download helpers
"download_url_as_bytesio",
"download_url_to_bytesio",
"download_url_to_file_3d",
"download_url_to_image_tensor",
"download_url_to_video_output",
# Conversions


@ -11,7 +11,8 @@ import torch
from aiohttp.client_exceptions import ClientError, ContentTypeError
from comfy_api.latest import IO as COMFY_IO
from comfy_api.latest import InputImpl
from comfy_api.latest import InputImpl, Types
from folder_paths import get_output_directory
from . import request_logger
from ._helpers import (
@ -261,3 +262,38 @@ def _generate_operation_id(method: str, url: str, attempt: int) -> str:
except Exception:
slug = "download"
return f"{method}_{slug}_try{attempt}_{uuid.uuid4().hex[:8]}"
async def download_url_to_file_3d(
url: str,
file_format: str,
*,
task_id: str | None = None,
timeout: float | None = None,
max_retries: int = 5,
cls: type[COMFY_IO.ComfyNode] | None = None,
) -> Types.File3D:
"""Downloads a 3D model file from a URL into memory as BytesIO.
If task_id is provided, also writes the file to disk in the output directory
for backward compatibility with the old save-to-disk behavior.
"""
file_format = file_format.lstrip(".").lower()
data = BytesIO()
await download_url_to_bytesio(
url,
data,
timeout=timeout,
max_retries=max_retries,
cls=cls,
)
if task_id is not None:
# Only for backward compatibility with the current behavior, where every 3D node is an output node.
# New API nodes should not pass "task_id"; users should use the "SaveGLB" node to save results.
output_dir = Path(get_output_directory())
output_path = output_dir / f"{task_id}.{file_format}"
output_path.write_bytes(data.getvalue())
data.seek(0)
return Types.File3D(source=data, file_format=file_format)
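A short usage sketch for the helper above; the URL and paths are placeholders, and the methods shown (get_bytes, save_to) are the ones the updated nodes call:

# Sketch: fetch a model into memory, then persist it explicitly.
file_3d = await download_url_to_file_3d("https://example.com/model.glb", "glb")
raw = file_3d.get_bytes()          # in-memory bytes, as used in nodes_rodin.py
file_3d.save_to("/tmp/model.glb")  # explicit save, as in SaveGLB / Preview3D
# Passing task_id="abc123" would additionally write <output_dir>/abc123.glb
# for backward compatibility.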


@ -94,7 +94,7 @@ async def upload_image_to_comfyapi(
*,
mime_type: str | None = None,
wait_label: str | None = "Uploading",
total_pixels: int = 2048 * 2048,
total_pixels: int | None = 2048 * 2048,
) -> str:
"""Uploads a single image to ComfyUI API and returns its download URL."""
return (


@ -28,12 +28,39 @@ class TextEncodeAceStepAudio(io.ComfyNode):
conditioning = node_helpers.conditioning_set_values(conditioning, {"lyrics_strength": lyrics_strength})
return io.NodeOutput(conditioning)
class TextEncodeAceStepAudio15(io.ComfyNode):
@classmethod
def define_schema(cls):
return io.Schema(
node_id="TextEncodeAceStepAudio1.5",
category="conditioning",
inputs=[
io.Clip.Input("clip"),
io.String.Input("tags", multiline=True, dynamic_prompts=True),
io.String.Input("lyrics", multiline=True, dynamic_prompts=True),
io.Int.Input("seed", default=0, min=0, max=0xffffffffffffffff, control_after_generate=True),
io.Int.Input("bpm", default=120, min=10, max=300),
io.Float.Input("duration", default=120.0, min=0.0, max=2000.0, step=0.1),
io.Combo.Input("timesignature", options=['2', '3', '4', '6']),
io.Combo.Input("language", options=["en", "ja", "zh", "es", "de", "fr", "pt", "ru", "it", "nl", "pl", "tr", "vi", "cs", "fa", "id", "ko", "uk", "hu", "ar", "sv", "ro", "el"]),
io.Combo.Input("keyscale", options=[f"{root} {quality}" for quality in ["major", "minor"] for root in ["C", "C#", "Db", "D", "D#", "Eb", "E", "F", "F#", "Gb", "G", "G#", "Ab", "A", "A#", "Bb", "B"]]),
],
outputs=[io.Conditioning.Output()],
)
@classmethod
def execute(cls, clip, tags, lyrics, seed, bpm, duration, timesignature, language, keyscale) -> io.NodeOutput:
tokens = clip.tokenize(tags, lyrics=lyrics, bpm=bpm, duration=duration, timesignature=int(timesignature), language=language, keyscale=keyscale, seed=seed)
conditioning = clip.encode_from_tokens_scheduled(tokens)
return io.NodeOutput(conditioning)
class EmptyAceStepLatentAudio(io.ComfyNode):
@classmethod
def define_schema(cls):
return io.Schema(
node_id="EmptyAceStepLatentAudio",
display_name="Empty Ace Step 1.0 Latent Audio",
category="latent/audio",
inputs=[
io.Float.Input("seconds", default=120.0, min=1.0, max=1000.0, step=0.1),
@ -51,12 +78,60 @@ class EmptyAceStepLatentAudio(io.ComfyNode):
return io.NodeOutput({"samples": latent, "type": "audio"})
class EmptyAceStep15LatentAudio(io.ComfyNode):
@classmethod
def define_schema(cls):
return io.Schema(
node_id="EmptyAceStep1.5LatentAudio",
display_name="Empty Ace Step 1.5 Latent Audio",
category="latent/audio",
inputs=[
io.Float.Input("seconds", default=120.0, min=1.0, max=1000.0, step=0.01),
io.Int.Input(
"batch_size", default=1, min=1, max=4096, tooltip="The number of latent images in the batch."
),
],
outputs=[io.Latent.Output()],
)
@classmethod
def execute(cls, seconds, batch_size) -> io.NodeOutput:
length = round(seconds * 48000 / 1920)  # 48 kHz audio, 1920 samples per latent frame (120 s -> 3000 frames)
latent = torch.zeros([batch_size, 64, length], device=comfy.model_management.intermediate_device())
return io.NodeOutput({"samples": latent, "type": "audio"})
class ReferenceTimbreAudio(io.ComfyNode):
@classmethod
def define_schema(cls):
return io.Schema(
node_id="ReferenceTimbreAudio",
category="advanced/conditioning/audio",
is_experimental=True,
description="This node sets the reference audio for timbre (for ace step 1.5)",
inputs=[
io.Conditioning.Input("conditioning"),
io.Latent.Input("latent", optional=True),
],
outputs=[
io.Conditioning.Output(),
]
)
@classmethod
def execute(cls, conditioning, latent=None) -> io.NodeOutput:
if latent is not None:
conditioning = node_helpers.conditioning_set_values(conditioning, {"reference_audio_timbre_latents": [latent["samples"]]}, append=True)
return io.NodeOutput(conditioning)
class AceExtension(ComfyExtension):
@override
async def get_node_list(self) -> list[type[io.ComfyNode]]:
return [
TextEncodeAceStepAudio,
EmptyAceStepLatentAudio,
TextEncodeAceStepAudio15,
EmptyAceStep15LatentAudio,
ReferenceTimbreAudio,
]
async def comfy_entrypoint() -> AceExtension:


@ -82,13 +82,14 @@ class VAEEncodeAudio(IO.ComfyNode):
@classmethod
def execute(cls, vae, audio) -> IO.NodeOutput:
sample_rate = audio["sample_rate"]
if 44100 != sample_rate:
waveform = torchaudio.functional.resample(audio["waveform"], sample_rate, 44100)
vae_sample_rate = getattr(vae, "audio_sample_rate", 44100)
if vae_sample_rate != sample_rate:
waveform = torchaudio.functional.resample(audio["waveform"], sample_rate, vae_sample_rate)
else:
waveform = audio["waveform"]
t = vae.encode(waveform.movedim(1, -1))
return IO.NodeOutput({"samples":t})
return IO.NodeOutput({"samples": t})
encode = execute # TODO: remove
@ -114,7 +115,8 @@ class VAEDecodeAudio(IO.ComfyNode):
std = torch.std(audio, dim=[1,2], keepdim=True) * 5.0
std[std < 1.0] = 1.0
audio /= std
return IO.NodeOutput({"waveform": audio, "sample_rate": 44100 if "sample_rate" not in samples else samples["sample_rate"]})
vae_sample_rate = getattr(vae, "audio_sample_rate", 44100)
return IO.NodeOutput({"waveform": audio, "sample_rate": vae_sample_rate if "sample_rate" not in samples else samples["sample_rate"]})
decode = execute # TODO: remove
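In isolation, the encode-side change above resamples incoming audio to whatever rate the VAE declares instead of a hard-coded 44100. A minimal sketch, assuming a hypothetical 48 kHz VAE:

import torchaudio

vae_sample_rate = getattr(vae, "audio_sample_rate", 44100)  # e.g. 48000
if sample_rate != vae_sample_rate:
    # torchaudio handles arbitrary rate pairs; the else branch skips the cost.
    waveform = torchaudio.functional.resample(waveform, sample_rate, vae_sample_rate)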


@ -622,14 +622,20 @@ class SaveGLB(IO.ComfyNode):
category="3d",
is_output_node=True,
inputs=[
IO.Mesh.Input("mesh"),
IO.MultiType.Input(
IO.Mesh.Input("mesh"),
types=[
IO.File3DGLB,
],
tooltip="Mesh or GLB file to save",
),
IO.String.Input("filename_prefix", default="mesh/ComfyUI"),
],
hidden=[IO.Hidden.prompt, IO.Hidden.extra_pnginfo]
)
@classmethod
def execute(cls, mesh, filename_prefix) -> IO.NodeOutput:
def execute(cls, mesh: Types.MESH | Types.File3D, filename_prefix: str) -> IO.NodeOutput:
full_output_folder, filename, counter, subfolder, filename_prefix = folder_paths.get_save_image_path(filename_prefix, folder_paths.get_output_directory())
results = []
@ -641,15 +647,26 @@ class SaveGLB(IO.ComfyNode):
for x in cls.hidden.extra_pnginfo:
metadata[x] = json.dumps(cls.hidden.extra_pnginfo[x])
for i in range(mesh.vertices.shape[0]):
if isinstance(mesh, Types.File3D):
# Handle File3D input - save BytesIO data to output folder
f = f"{filename}_{counter:05}_.glb"
save_glb(mesh.vertices[i], mesh.faces[i], os.path.join(full_output_folder, f), metadata)
mesh.save_to(os.path.join(full_output_folder, f))
results.append({
"filename": f,
"subfolder": subfolder,
"type": "output"
})
counter += 1
else:
# Handle Mesh input - save vertices and faces as GLB
for i in range(mesh.vertices.shape[0]):
f = f"{filename}_{counter:05}_.glb"
save_glb(mesh.vertices[i], mesh.faces[i], os.path.join(full_output_folder, f), metadata)
results.append({
"filename": f,
"subfolder": subfolder,
"type": "output"
})
counter += 1
return IO.NodeOutput(ui={"3d": results})
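Both input shapes the node now accepts, side by side; glb_bytes is a placeholder and the names are the ones used in this diff:

from io import BytesIO

# File3D branch: bytes are written out verbatim via save_to().
glb_file = Types.File3D(source=BytesIO(glb_bytes), file_format="glb")
# Mesh branch: vertices/faces are serialized per batch item via save_glb().
# IO.MultiType lets either be wired into the same "mesh" input.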


@ -1,9 +1,10 @@
import nodes
import folder_paths
import os
import uuid
from typing_extensions import override
from comfy_api.latest import IO, ComfyExtension, InputImpl, UI
from comfy_api.latest import IO, UI, ComfyExtension, InputImpl, Types
from pathlib import Path
@ -81,7 +82,19 @@ class Preview3D(IO.ComfyNode):
is_experimental=True,
is_output_node=True,
inputs=[
IO.String.Input("model_file", default="", multiline=False),
IO.MultiType.Input(
IO.String.Input("model_file", default="", multiline=False),
types=[
IO.File3DGLB,
IO.File3DGLTF,
IO.File3DFBX,
IO.File3DOBJ,
IO.File3DSTL,
IO.File3DUSDZ,
IO.File3DAny,
],
tooltip="3D model file or path string",
),
IO.Load3DCamera.Input("camera_info", optional=True),
IO.Image.Input("bg_image", optional=True),
],
@ -89,10 +102,15 @@ class Preview3D(IO.ComfyNode):
)
@classmethod
def execute(cls, model_file, **kwargs) -> IO.NodeOutput:
def execute(cls, model_file: str | Types.File3D, **kwargs) -> IO.NodeOutput:
if isinstance(model_file, Types.File3D):
filename = f"preview3d_{uuid.uuid4().hex}.{model_file.format}"
model_file.save_to(os.path.join(folder_paths.get_output_directory(), filename))
else:
filename = model_file
camera_info = kwargs.get("camera_info", None)
bg_image = kwargs.get("bg_image", None)
return IO.NodeOutput(ui=UI.PreviewUI3D(model_file, camera_info, bg_image=bg_image))
return IO.NodeOutput(ui=UI.PreviewUI3D(filename, camera_info, bg_image=bg_image))
process = execute # TODO: remove


@ -1,3 +1,3 @@
# This file is automatically generated by the build process when version is
# updated in pyproject.toml.
__version__ = "0.11.1"
__version__ = "0.12.1"


@ -10,6 +10,7 @@ from app.logger import setup_logger
from app.assets.scanner import seed_assets
import itertools
import utils.extra_config
import utils.landlock
import logging
import sys
from comfy_execution.progress import get_progress_state
@ -370,6 +371,13 @@ def start_comfyui(asyncio_loop=None):
folder_paths.set_temp_directory(temp_dir)
cleanup_temp()
if args.enable_landlock:
logging.info("Enabling Landlock")
landlock_ok = utils.landlock.enable_landlock(args, logging.getLogger("landlock"))
if not landlock_ok:
logging.critical("Requested Landlock sandbox but it could not be enabled. Exiting.")
sys.exit(1)
if args.windows_standalone_build:
try:
import new_updater


@ -1001,7 +1001,7 @@ class DualCLIPLoader:
def INPUT_TYPES(s):
return {"required": { "clip_name1": (folder_paths.get_filename_list("text_encoders"), ),
"clip_name2": (folder_paths.get_filename_list("text_encoders"), ),
"type": (["sdxl", "sd3", "flux", "hunyuan_video", "hidream", "hunyuan_image", "hunyuan_video_15", "kandinsky5", "kandinsky5_image", "ltxv", "newbie"], ),
"type": (["sdxl", "sd3", "flux", "hunyuan_video", "hidream", "hunyuan_image", "hunyuan_video_15", "kandinsky5", "kandinsky5_image", "ltxv", "newbie", "ace"], ),
},
"optional": {
"device": (["default", "cpu"], {"advanced": True}),


@ -1,6 +1,6 @@
[project]
name = "ComfyUI"
version = "0.11.1"
version = "0.12.1"
readme = "README.md"
license = { file = "LICENSE" }
requires-python = ">=3.10"


@ -1,5 +1,5 @@
comfyui-frontend-package==1.37.11
comfyui-workflow-templates==0.8.27
comfyui-workflow-templates==0.8.31
comfyui-embedded-docs==0.4.0
torch
torchsde

utils/landlock.py (new file, 332 lines)

@ -0,0 +1,332 @@
import ctypes
import errno
import logging
import os
import sys
import tempfile
from dataclasses import dataclass
# Landlock constants copied from linux/landlock.h
PR_SET_NO_NEW_PRIVS = 38
LANDLOCK_RULE_PATH_BENEATH = 1
LANDLOCK_CREATE_RULESET_VERSION = 1
LANDLOCK_ACCESS_FS_EXECUTE = 1 << 0
LANDLOCK_ACCESS_FS_WRITE_FILE = 1 << 1
LANDLOCK_ACCESS_FS_READ_FILE = 1 << 2
LANDLOCK_ACCESS_FS_READ_DIR = 1 << 3
LANDLOCK_ACCESS_FS_REMOVE_DIR = 1 << 4
LANDLOCK_ACCESS_FS_REMOVE_FILE = 1 << 5
LANDLOCK_ACCESS_FS_MAKE_CHAR = 1 << 6
LANDLOCK_ACCESS_FS_MAKE_DIR = 1 << 7
LANDLOCK_ACCESS_FS_MAKE_REG = 1 << 8
LANDLOCK_ACCESS_FS_MAKE_SOCK = 1 << 9
LANDLOCK_ACCESS_FS_MAKE_FIFO = 1 << 10
LANDLOCK_ACCESS_FS_MAKE_BLOCK = 1 << 11
LANDLOCK_ACCESS_FS_MAKE_SYM = 1 << 12
LANDLOCK_ACCESS_FS_REFER = 1 << 13
LANDLOCK_ACCESS_FS_TRUNCATE = 1 << 14
LANDLOCK_ACCESS_FS_IOCTL_DEV = 1 << 15 # ABI v5+
# Pre-computed access masks
FS_READ_ACCESS = (
LANDLOCK_ACCESS_FS_READ_FILE
| LANDLOCK_ACCESS_FS_READ_DIR
| LANDLOCK_ACCESS_FS_EXECUTE
)
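# i.e. READ_FILE | READ_DIR | EXECUTE == (1 << 2) | (1 << 3) | (1 << 0) == 0x0D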
FS_WRITE_ACCESS = (
FS_READ_ACCESS
| LANDLOCK_ACCESS_FS_WRITE_FILE
| LANDLOCK_ACCESS_FS_MAKE_DIR
| LANDLOCK_ACCESS_FS_MAKE_REG
| LANDLOCK_ACCESS_FS_MAKE_SOCK
| LANDLOCK_ACCESS_FS_MAKE_FIFO
| LANDLOCK_ACCESS_FS_MAKE_BLOCK
| LANDLOCK_ACCESS_FS_MAKE_CHAR
| LANDLOCK_ACCESS_FS_MAKE_SYM
| LANDLOCK_ACCESS_FS_REMOVE_DIR
| LANDLOCK_ACCESS_FS_REMOVE_FILE
)
# Syscall numbers are ABI-stable across all 64-bit Linux architectures
SYS_LANDLOCK_CREATE_RULESET = 444
SYS_LANDLOCK_ADD_RULE = 445
SYS_LANDLOCK_RESTRICT_SELF = 446
class _RulesetAttr(ctypes.Structure):
_fields_ = [("handled_access_fs", ctypes.c_uint64)]
class _PathBeneathAttr(ctypes.Structure):
_fields_ = [
("allowed_access", ctypes.c_uint64),
("parent_fd", ctypes.c_int32),
("reserved", ctypes.c_uint32),
]
@dataclass(frozen=True)
class LandlockRules:
read_paths: set[str]
write_paths: set[str]
ioctl_paths: set[str]
def _normalize_paths(paths: set[str]) -> set[str]:
normalized = set()
for path in paths:
if not path:
continue
normalized.add(os.path.realpath(path))
return normalized
class LandlockEnforcer:
def __init__(self, logger: logging.Logger | None = None):
self.log = logger or logging.getLogger(__name__)
self.libc = ctypes.CDLL(None, use_errno=True)
self.libc.syscall.restype = ctypes.c_long
self.libc.prctl.restype = ctypes.c_int
def _syscall(self, syscall_nr, *args) -> tuple[int | None, int]:
ctypes.set_errno(0)
res = self.libc.syscall(ctypes.c_long(syscall_nr), *args)
if res == -1:
return None, ctypes.get_errno()
return res, 0
def _abi_version(self) -> int:
res, err = self._syscall(
SYS_LANDLOCK_CREATE_RULESET,
ctypes.c_void_p(0),
ctypes.c_size_t(0),
ctypes.c_uint(LANDLOCK_CREATE_RULESET_VERSION),
)
if res is None:
if err in (errno.ENOSYS, errno.EOPNOTSUPP):
return 0
return -err
return res
def _create_ruleset(self, handled_access: int) -> tuple[int | None, int]:
ruleset = _RulesetAttr(ctypes.c_uint64(handled_access))
return self._syscall(
SYS_LANDLOCK_CREATE_RULESET,
ctypes.byref(ruleset),
ctypes.c_size_t(ctypes.sizeof(ruleset)),
ctypes.c_uint(0),
)
def _add_rule(self, ruleset_fd: int, path: str, access_mask: int, allow_ioctl: bool) -> bool:
if allow_ioctl:
access_mask |= LANDLOCK_ACCESS_FS_IOCTL_DEV
try:
dir_fd = os.open(path, os.O_PATH | os.O_CLOEXEC)
except OSError as exc:
self.log.warning("Landlock: skipping %s (%s)", path, exc)
return False
try:
rule = _PathBeneathAttr(
ctypes.c_uint64(access_mask), ctypes.c_int32(dir_fd), ctypes.c_uint32(0)
)
res, err = self._syscall(
SYS_LANDLOCK_ADD_RULE,
ctypes.c_int(ruleset_fd),
ctypes.c_int(LANDLOCK_RULE_PATH_BENEATH),
ctypes.byref(rule),
ctypes.c_uint(0),
)
if res is None:
self.log.warning("Landlock: failed to add %s (errno=%s)", path, err)
return False
return True
finally:
os.close(dir_fd)
def _restrict_self(self, ruleset_fd: int) -> bool:
if self.libc.prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) != 0:
self.log.warning(
"Landlock: prctl(PR_SET_NO_NEW_PRIVS) failed (errno=%s)",
ctypes.get_errno(),
)
return False
res, err = self._syscall(SYS_LANDLOCK_RESTRICT_SELF, ctypes.c_int(ruleset_fd), ctypes.c_uint(0))
if res is None:
self.log.warning("Landlock: restrict_self failed (errno=%s)", err)
return False
return True
def apply(self, rules: LandlockRules) -> bool:
if not sys.platform.startswith("linux"):
self.log.info("Landlock: not a Linux platform, skipping.")
return False
abi_version = self._abi_version()
if abi_version <= 0:
self.log.info("Landlock: not available on this kernel (abi=%s).", abi_version)
return False
read_access = FS_READ_ACCESS
handled_write_access = FS_WRITE_ACCESS
allowed_write_access = (
read_access
| LANDLOCK_ACCESS_FS_WRITE_FILE
| LANDLOCK_ACCESS_FS_MAKE_DIR
| LANDLOCK_ACCESS_FS_MAKE_REG
| LANDLOCK_ACCESS_FS_REMOVE_DIR
| LANDLOCK_ACCESS_FS_REMOVE_FILE
) # leave other handled rights (symlinks, device nodes, sockets) denied
if abi_version >= 2:
handled_write_access |= LANDLOCK_ACCESS_FS_TRUNCATE
allowed_write_access |= LANDLOCK_ACCESS_FS_TRUNCATE
if abi_version >= 3:
handled_write_access |= LANDLOCK_ACCESS_FS_REFER
allowed_write_access |= LANDLOCK_ACCESS_FS_REFER
handled_access = handled_write_access | read_access
ioctl_supported = abi_version >= 5
if ioctl_supported:
handled_access |= LANDLOCK_ACCESS_FS_IOCTL_DEV
write_paths = _normalize_paths(rules.write_paths)
read_paths = _normalize_paths(rules.read_paths) - write_paths
# In theory ioctl paths could also require read or write access; in practice it's just /dev.
ioctl_paths = _normalize_paths(rules.ioctl_paths)
if ioctl_paths and not ioctl_supported:
self.log.info(
"Landlock: ioctl access requested but ABI %s has no support; continuing without ioctl.",
abi_version,
)
ruleset_fd, err = self._create_ruleset(handled_access)
if ruleset_fd is None:
self.log.warning("Landlock: failed to create ruleset (errno=%s)", err)
return False
try:
for path in write_paths:
if path != os.path.sep:
try:
os.makedirs(path, exist_ok=True)
except Exception as exc:
self.log.warning("Landlock: unable to prepare %s (%s)", path, exc)
return False
if not self._add_rule(
ruleset_fd, path, allowed_write_access, ioctl_supported and path in ioctl_paths
):
return False
for path in read_paths:
self._add_rule(ruleset_fd, path, read_access, ioctl_supported and path in ioctl_paths)
if not self._restrict_self(ruleset_fd):
return False
finally:
if ruleset_fd is not None:
os.close(ruleset_fd)
if write_paths:
self.log.info(
"Landlock enabled (ABI %s). Writable roots: %s",
abi_version,
", ".join(sorted(write_paths)),
)
else:
self.log.info("Landlock enabled (ABI %s). No writable roots configured.", abi_version)
return True
_landlock_applied = False
def build_default_rules(args) -> LandlockRules:
import folder_paths
from urllib.parse import urlparse
write_paths: set[str] = {
folder_paths.get_output_directory(),
folder_paths.get_input_directory(),
folder_paths.get_temp_directory(),
folder_paths.get_user_directory(),
}
ioctl_paths: set[str] = set()
# Torch and some backends use system temp and /dev/shm
write_paths.add(tempfile.gettempdir())
if args.temp_directory:
write_paths.add(os.path.join(os.path.abspath(args.temp_directory), "temp"))
db_url = getattr(args, "database_url", None)
if db_url and db_url.startswith("sqlite"):
parsed = urlparse(db_url)
if parsed.scheme == "sqlite" and parsed.path:
write_paths.add(os.path.abspath(os.path.dirname(parsed.path)))
for path in args.landlock_allow_writable or []:
if path:
write_paths.add(path)
# Build read paths - only what's actually needed
read_paths: set[str] = set()
# ComfyUI codebase
read_paths.add(folder_paths.base_path)
# All configured model directories (includes extra_model_paths.yaml)
for folder_name in folder_paths.folder_names_and_paths:
for path in folder_paths.folder_names_and_paths[folder_name][0]:
read_paths.add(path)
# Python installation and site-packages
read_paths.add(sys.prefix)
if sys.base_prefix != sys.prefix:
read_paths.add(sys.base_prefix)
for path in sys.path:
if path and os.path.isdir(path):
read_paths.add(path)
# System libraries (required for shared libs, CUDA, etc.)
for system_path in ["/usr", "/lib", "/lib64", "/opt", "/etc", "/proc", "/sys"]:
if os.path.exists(system_path):
read_paths.add(system_path)
# NixOS: /nix/store contains the entire system
if os.path.exists("/nix"):
read_paths.add("/nix")
# /dev needs write + ioctl for CUDA/GPU access
write_paths.add("/dev")
ioctl_paths.add("/dev")
# User-specified additional read paths
for path in getattr(args, "landlock_allow_readable", None) or []:
if path:
read_paths.add(path)
return LandlockRules(read_paths=_normalize_paths(read_paths), write_paths=_normalize_paths(write_paths), ioctl_paths=_normalize_paths(ioctl_paths))
def enable_landlock(args, logger: logging.Logger | None = None) -> bool:
global _landlock_applied
if _landlock_applied:
return True
if not getattr(args, "enable_landlock", False):
return False
enforcer = LandlockEnforcer(logger)
try:
_landlock_applied = enforcer.apply(build_default_rules(args))
except Exception:
enforcer.log.exception("Landlock: unexpected failure while applying ruleset.")
_landlock_applied = False
return _landlock_applied
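A minimal sketch of driving the enforcer directly, bypassing enable_landlock and the argparse flags; the paths are illustrative:

import logging
from utils.landlock import LandlockEnforcer, LandlockRules

rules = LandlockRules(
    read_paths={"/usr", "/opt/models"},   # read + execute beneath these roots
    write_paths={"/srv/comfy/output"},    # read plus restricted write access
    ioctl_paths=set(),                    # no device ioctl needed here
)
# apply() is best-effort: it returns False on non-Linux hosts or on kernels
# without Landlock, and it normalizes the paths itself.
enabled = LandlockEnforcer(logging.getLogger("landlock")).apply(rules)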