Mirror of https://github.com/comfyanonymous/ComfyUI.git (synced 2026-05-08 16:22:38 +08:00)
* **Add prompt_id to progress_text binary WS messages**

  Add a `supports_progress_text_metadata` feature flag and extend `send_progress_text()` to accept an optional `prompt_id` param. When `prompt_id` is provided and the client supports the new format, the binary wire format includes a length-prefixed `prompt_id` field:

  `[4B event_type][4B prompt_id_len][prompt_id][4B node_id_len][node_id][text]`

  The legacy format is preserved for clients without the flag. Both callers (`nodes_images.py`, `client.py`) are updated to pass `prompt_id` from `get_executing_context()`. Part of COM-12671: parallel workflow execution support.

  Amp-Thread-ID: https://ampcode.com/threads/T-019c79f7-f19b-70d9-b662-0687cc206282

* **refactor: add prompt_id as hidden type, fix imports, add docstrings**

  - Add `PROMPT_ID` as a new hidden type in the `Hidden` enum, `HiddenHolder`, `HiddenInputTypeDict`, and execution-engine resolution (both V3 and legacy)
  - Refactor `GetImageSize` to use `cls.hidden.prompt_id` instead of manually calling `get_executing_context()`, addressing reviewer feedback
  - Remove the lazy import of `get_executing_context` from `nodes_images.py`
  - Add docstrings to `send_progress_text`, `_display_text`, `HiddenHolder`, and `HiddenHolder.from_dict`

  Amp-Thread-ID: https://ampcode.com/threads/T-019ca1cb-0150-7549-8b1b-6713060d3408

* **fix: send_progress_text unicasts to client_id instead of broadcasting**

  - Default `sid` to `self.client_id` when not explicitly provided, matching every other WS message dispatch (`executing`, `executed`, `progress_state`, etc.)
  - Previously `sid=None` caused a broadcast to all connected clients
  - Format the signature per ruff, remove redundant comments
  - Add unit tests for routing, the legacy format, and the new `prompt_id` format

  Amp-Thread-ID: https://ampcode.com/threads/T-019ca3ce-c530-75dd-8d68-349e745a022e

* **remove send_progress_text stub tests**

  Copy-paste stub tests don't verify the real implementation and add maintenance burden without meaningful coverage.

  Amp-Thread-ID: https://ampcode.com/threads/T-019ca3ce-c530-75dd-8d68-349e745a022e

* **fix: always send new binary format when client supports feature flag**

  When `prompt_id` is None, encode it as a zero-length string instead of falling back to the old format. This prevents binary parse corruption on the frontend. Addresses review feedback: https://github.com/Comfy-Org/ComfyUI/pull/12540#discussion_r2923412491

Co-authored-by: bymyself <cbyrne@comfy.org>
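The length-prefixed wire format described above can be sketched as a pair of encode/decode helpers. This is a minimal illustration, not ComfyUI's actual implementation: the `PROGRESS_TEXT_EVENT` value and the big-endian byte order are assumptions here, and the real constants live in the server code. It also reflects the final fix: a `None` prompt_id is encoded as a zero-length string rather than falling back to the legacy format.

```python
import struct

# Hypothetical event-type code for progress_text; the real constant
# in ComfyUI's server code may differ.
PROGRESS_TEXT_EVENT = 3

def encode_progress_text(prompt_id, node_id, text):
    """Encode [4B event_type][4B prompt_id_len][prompt_id][4B node_id_len][node_id][text].

    A None prompt_id becomes a zero-length string, so clients that
    advertised the feature flag always receive the new format.
    """
    prompt_bytes = (prompt_id or "").encode("utf-8")
    node_bytes = node_id.encode("utf-8")
    return (
        struct.pack(">I", PROGRESS_TEXT_EVENT)
        + struct.pack(">I", len(prompt_bytes)) + prompt_bytes
        + struct.pack(">I", len(node_bytes)) + node_bytes
        + text.encode("utf-8")
    )

def decode_progress_text(data):
    """Inverse of encode_progress_text; returns (prompt_id, node_id, text)."""
    event, = struct.unpack_from(">I", data, 0)
    assert event == PROGRESS_TEXT_EVENT
    off = 4
    plen, = struct.unpack_from(">I", data, off); off += 4
    prompt_id = data[off:off + plen].decode("utf-8"); off += plen
    nlen, = struct.unpack_from(">I", data, off); off += 4
    node_id = data[off:off + nlen].decode("utf-8"); off += nlen
    text = data[off:].decode("utf-8")
    return (prompt_id or None), node_id, text
```

The trailing `text` field needs no length prefix because it runs to the end of the message, which is why a zero-length `prompt_id` is safe for the decoder while an absent field would not be.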
| Name |
|---|
| audio_encoders |
| cldm |
| comfy_types |
| extra_samplers |
| image_encoders |
| k_diffusion |
| ldm |
| sd1_tokenizer |
| t2i_adapter |
| taesd |
| text_encoders |
| weight_adapter |
| cli_args.py |
| clip_config_bigg.json |
| clip_model.py |
| clip_vision_config_g.json |
| clip_vision_config_h.json |
| clip_vision_config_vitl_336_llava.json |
| clip_vision_config_vitl_336.json |
| clip_vision_config_vitl.json |
| clip_vision_siglip2_base_naflex.json |
| clip_vision_siglip_384.json |
| clip_vision_siglip_512.json |
| clip_vision.py |
| conds.py |
| context_windows.py |
| controlnet.py |
| diffusers_convert.py |
| diffusers_load.py |
| float.py |
| gligen.py |
| hooks.py |
| latent_formats.py |
| lora_convert.py |
| lora.py |
| memory_management.py |
| model_base.py |
| model_detection.py |
| model_management.py |
| model_patcher.py |
| model_sampling.py |
| nested_tensor.py |
| ops.py |
| options.py |
| patcher_extension.py |
| pinned_memory.py |
| pixel_space_convert.py |
| quant_ops.py |
| rmsnorm.py |
| sample.py |
| sampler_helpers.py |
| samplers.py |
| sd1_clip_config.json |
| sd1_clip.py |
| sd.py |
| sdxl_clip.py |
| supported_models_base.py |
| supported_models.py |
| utils.py |
| windows.py |