* Add prompt_id to progress_text binary WS messages
Add a supports_progress_text_metadata feature flag and extend
send_progress_text() to accept an optional prompt_id parameter. When
prompt_id is provided and the client supports the new format,
the binary wire format includes a length-prefixed prompt_id field:
[4B event_type][4B prompt_id_len][prompt_id][4B node_id_len][node_id][text]
Legacy format preserved for clients without the flag.
Both callers (nodes_images.py, client.py) updated to pass prompt_id
from get_executing_context().
Part of COM-12671: parallel workflow execution support.
Amp-Thread-ID: https://ampcode.com/threads/T-019c79f7-f19b-70d9-b662-0687cc206282
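A minimal sketch of the length-prefixed encoding described above, assuming
big-endian 4-byte lengths; the event-type constant and helper name are
illustrative, not ComfyUI's actual identifiers:

```python
import struct

PROGRESS_TEXT_EVENT = 1  # hypothetical event-type code

def encode_progress_text(prompt_id: str, node_id: str, text: str) -> bytes:
    """Pack [4B event_type][4B prompt_id_len][prompt_id][4B node_id_len][node_id][text]."""
    prompt_bytes = prompt_id.encode("utf-8")
    node_bytes = node_id.encode("utf-8")
    return (
        struct.pack(">I", PROGRESS_TEXT_EVENT)
        + struct.pack(">I", len(prompt_bytes)) + prompt_bytes
        + struct.pack(">I", len(node_bytes)) + node_bytes
        + text.encode("utf-8")
    )
```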
* refactor: add prompt_id as hidden type, fix imports, add docstrings
- Add PROMPT_ID as a new hidden type in the Hidden enum, HiddenHolder,
HiddenInputTypeDict, and execution engine resolution (both V3 and legacy)
- Refactor GetImageSize to use cls.hidden.prompt_id instead of manually
calling get_executing_context() — addresses reviewer feedback
- Remove lazy import of get_executing_context from nodes_images.py
- Add docstrings to send_progress_text, _display_text, HiddenHolder,
and HiddenHolder.from_dict
Amp-Thread-ID: https://ampcode.com/threads/T-019ca1cb-0150-7549-8b1b-6713060d3408
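The HiddenHolder change, sketched standalone; the field names follow the
commit text, but the surrounding classes are simplified stand-ins, not the
real V3 API:

```python
from enum import Enum

class Hidden(str, Enum):
    PROMPT_ID = "PROMPT_ID"  # the new hidden type

class HiddenHolder:
    """Carries resolved hidden values so a node can read cls.hidden.prompt_id
    instead of calling get_executing_context() itself."""

    def __init__(self, prompt_id: str | None = None):
        self.prompt_id = prompt_id

    @classmethod
    def from_dict(cls, d: dict) -> "HiddenHolder":
        # The execution engine fills this from the resolved hidden inputs.
        return cls(prompt_id=d.get(Hidden.PROMPT_ID))
```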
* fix: send_progress_text unicasts to client_id instead of broadcasting
- Default sid to self.client_id when not explicitly provided, matching
every other WS message dispatch (executing, executed, progress_state, etc.)
- Previously sid=None caused broadcast to all connected clients
- Format signature per ruff, remove redundant comments
- Add unit tests for routing, legacy format, and new prompt_id format
Amp-Thread-ID: https://ampcode.com/threads/T-019ca3ce-c530-75dd-8d68-349e745a022e
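A sketch of the routing fix, reusing the encode helper sketched earlier;
`client_id` and `send_bytes` are assumptions about the server API, mirroring
the dispatch pattern the commit describes:

```python
async def send_progress_text(self, text, node_id, prompt_id=None, sid=None):
    """Send progress text to one client rather than broadcasting."""
    if sid is None:
        sid = self.client_id  # sid=None used to mean "broadcast to everyone"
    payload = encode_progress_text(prompt_id or "", node_id, text)
    await self.send_bytes(payload, sid)
```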
* remove send_progress_text stub tests
Copy-pasted stub tests don't exercise the real implementation and add
maintenance burden without providing meaningful coverage.
Amp-Thread-ID: https://ampcode.com/threads/T-019ca3ce-c530-75dd-8d68-349e745a022e
* fix: always send new binary format when client supports feature flag
When prompt_id is None, encode it as a zero-length string instead of falling
back to the old format (see the sketch below). This prevents binary parse
corruption on the frontend.
Addresses review feedback:
https://github.com/Comfy-Org/ComfyUI/pull/12540#discussion_r2923412491
---------
Co-authored-by: bymyself <cbyrne@comfy.org>
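Sketched concretely, assuming the helpers above: once a client advertises
the flag, the new layout is always used, and a missing prompt_id becomes a
zero-length field. The legacy encoder here is a simplified stand-in, not
the real pre-flag format:

```python
import struct

def encode_legacy_progress_text(node_id: str, text: str) -> bytes:
    # Simplified stand-in for the pre-flag wire format.
    node_bytes = node_id.encode("utf-8")
    return struct.pack(">I", len(node_bytes)) + node_bytes + text.encode("utf-8")

def encode_for_client(supports_progress_text_metadata: bool,
                      prompt_id, node_id, text) -> bytes:
    if supports_progress_text_metadata:
        # Always the new layout: None becomes a zero-length prompt_id field,
        # so the frontend never has to guess which format it is parsing.
        return encode_progress_text(prompt_id or "", node_id, text)
    return encode_legacy_progress_text(node_id, text)
```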
* Support for async execution functions
This commit adds support for node execution functions defined as async. When
a node's execution function is async, we can continue executing other nodes
while it awaits.
Standard uses of `await` should "just work", but people will still have
to be careful if they spawn actual threads. Because torch doesn't provide
async/await versions of its functions, this won't help much with most
locally-executing nodes, but it works well for things like web requests
to other machines (see the sketch below).
In addition to the execute function, the `VALIDATE_INPUTS` and
`check_lazy_status` functions can also be defined as async, though we'll
only resolve one node at a time right now for those.
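What an async node can look like under this change, as a sketch with a
made-up node and aiohttp standing in for a web request to another machine:

```python
import aiohttp

class RemoteCaptionNode:
    """Hypothetical node whose execution function is async."""

    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"url": ("STRING", {})}}

    RETURN_TYPES = ("STRING",)
    FUNCTION = "fetch"

    async def fetch(self, url):
        # While this await is pending, the executor can run other nodes.
        async with aiohttp.ClientSession() as session:
            async with session.get(url) as resp:
                return (await resp.text(),)
```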
* Add the execution model tests to CI
* Add a missing file
It looks like this got caught by .gitignore? There's probably a better
place to put it, but I'm not sure what that is.
* Add the websocket library for automated tests
* Add additional tests for async error cases
Also fixes a bug found when an async function throws an error after
being scheduled as a task.
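The failure mode in plain asyncio terms: an exception raised inside an
already-scheduled task only surfaces when the task is awaited, so the
executor has to collect its tasks rather than fire-and-forget them:

```python
import asyncio

async def run_node():
    raise RuntimeError("node failed after being scheduled")

async def main():
    task = asyncio.create_task(run_node())
    try:
        await task  # without this await, the error would only show up as an
                    # "exception was never retrieved" warning at shutdown
    except RuntimeError as e:
        print(f"caught: {e}")

asyncio.run(main())
```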
* Add a feature flags message to reduce bandwidth
We now send only one preview message, using the latest format the client
supports.
At some point in the future, we'll add a console warning when a client
fails to send a feature flags message.
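A sketch of the handshake this enables; the payload shape is an assumption,
not the exact ComfyUI message schema:

```python
import json

# Client side: announce capabilities once, right after connecting.
feature_flags = json.dumps({
    "type": "feature_flags",                      # assumed message type
    "data": {"supports_preview_metadata": True},  # assumed flag name
})
# ws.send(feature_flags)

# Server side: record what this client can decode, then send only the
# newest preview format it supports instead of one message per format.
```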
* Add async tests to CI
* Don't actually add new tests in this PR
Will do it in a separate PR
* Resolve unit test in GPU-less runner
* Just remove the tests that GHA can't handle
* Change line endings to UNIX-style
* Avoid loading model_management.py so early
Because model_management.py has a top-level `logging.info`, we have to
be careful not to import that file before we call `setup_logging`. If we
do, we end up with the default logging handler registered in addition to
our custom one.
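The mechanism, reproduced standalone: a module-level `logging.info()` call
with no handlers configured makes the logging module run `basicConfig()`
and install a default handler, so a later custom handler duplicates output:

```python
import logging

# An import-time logging.info() before setup: the root logger has no
# handlers yet, so logging.info() silently runs basicConfig() and
# installs a default StreamHandler.
logging.info("imported too early")

# Later, a custom setup (analogous to setup_logging) adds its own handler;
# the root logger now has two handlers and every record prints twice.
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(levelname)s: %(message)s"))
logging.getLogger().addHandler(handler)
logging.getLogger().warning("this line is emitted twice")
```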