- Cosmos now fully tested
- Preliminary support for the essential Cosmos prompt "upsampler"
- Lumina tests
- Tweaks to language and image resizing nodes
- Fix for #31: all the samplers are now present again
- `--panics-when=torch.cuda.OutOfMemory` now correctly panics and exits
  the worker, giving it time to reply that the execution failed, and
  handles irrecoverable out-of-memory errors better (see the sketch
  after this item)
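  A minimal sketch of the failure path this flag enables; the flag name
  comes from this changelog, while `job` and `reply` are hypothetical
  stand-ins for the worker's internals:

  ```python
  import sys

  import torch

  def execute_job(job, reply):
      # `job` and `reply` are illustrative placeholders, not the fork's API.
      try:
          job.execute()
      except torch.cuda.OutOfMemoryError:
          # Reply first so the caller learns that the execution failed...
          reply(status="failed", reason="CUDA out of memory")
          # ...then exit: after a CUDA OOM the process state is often
          # irrecoverable, so a replacement process is safer than retrying.
          sys.exit(1)
  ```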
- `--executor-factory=ProcessPoolExecutor` uses a process instead of a
  thread to execute ComfyUI workflows when using the worker. When this
  process panics and exits, it is correctly replaced, making the worker
  more robust (see the sketch after this item)
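  For illustration, here is how such replacement can look with the
  stdlib's `ProcessPoolExecutor`, which marks the pool broken when a
  worker process dies; `run_workflow` is a hypothetical stand-in:

  ```python
  from concurrent.futures import ProcessPoolExecutor
  from concurrent.futures.process import BrokenProcessPool

  def run_workflow(workflow):
      ...  # stand-in for executing a ComfyUI workflow in the worker process

  def make_executor():
      # One worker process isolates a crash to a single workflow.
      return ProcessPoolExecutor(max_workers=1)

  executor = make_executor()

  def submit(workflow):
      global executor
      try:
          return executor.submit(run_workflow, workflow).result()
      except BrokenProcessPool:
          # The worker process exited (e.g. it panicked on OOM); the stdlib
          # pool is now unusable, so replace it before the next submission.
          executor.shutdown(cancel_futures=True)
          executor = make_executor()
          raise
  ```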
- export_custom_nodes() finds all the classes that inherit from
  CustomNode and exports them so that custom node discovery can find
  them (see the sketch after this item)
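  A minimal sketch of the idea, assuming a `CustomNode` base class and
  the usual `NODE_CLASS_MAPPINGS` discovery convention; the fork's
  actual implementation may differ:

  ```python
  import inspect
  import sys

  class CustomNode:
      """Stand-in for the package's real CustomNode base class."""

  def export_custom_nodes():
      # Scan the calling module's namespace for CustomNode subclasses and
      # register them under the mapping that node discovery scans for.
      namespace = sys._getframe(1).f_globals
      found = {
          name: obj
          for name, obj in namespace.items()
          if inspect.isclass(obj)
          and issubclass(obj, CustomNode)
          and obj is not CustomNode
      }
      namespace.setdefault("NODE_CLASS_MAPPINGS", {}).update(found)
  ```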
- regular expression nodes
- additional string formatting and parsing nodes
- fix #29: str(model) no longer raises exceptions, as it did with
  HyVideoModelLoader
- don't try to format CUDA tensors, because that can sometimes raise
  exceptions (see the sketch after this item)
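  An illustrative guard, not the fork's exact code: stringifying a CUDA
  tensor copies it to the host, which can raise (e.g. after an OOM or a
  device-side assert), so describe the tensor instead:

  ```python
  import torch

  def safe_repr(value):
      # Avoid reading CUDA tensor contents when building log messages;
      # report shape and dtype, which never touch the device.
      if isinstance(value, torch.Tensor) and value.is_cuda:
          return f"<cuda tensor shape={tuple(value.shape)} dtype={value.dtype}>"
      return repr(value)
  ```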
- cudaAllocAsync has been disabled for now due to bugs in PyTorch 2.6.0
- improve Florence-2 support
- add support for PaliGemma 2. This requires a fix for transformers
  that is currently staged in another repo; install it with
  `uv pip install --no-deps "transformers@git+https://github.com/zucchini-nlp/transformers.git@paligemma-fix-kwargs"`
- triton has been updated
- fix missing `__init__.py` files
- use upstream ComfyUI's logger when running interactively
- move the extra nodes files to where this fork expects them
- add the mochi checkpoints to known models
- add a mochi workflow test
- Adding the output paths now correctly registers a relative path,
  e.g., outputs/loras and models/loras will now be searched on all
  your base paths (see the sketch after this item)
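  A toy illustration of the search behavior (the paths here are made up,
  and this is not the fork's code):

  ```python
  import os

  # A relative entry is joined against every registered base path.
  base_paths = ["/opt/comfyui", "/mnt/storage/comfyui"]

  def candidate_dirs(relative_path):
      return [os.path.join(base, relative_path) for base in base_paths]

  print(candidate_dirs("models/loras"))
  # ['/opt/comfyui/models/loras', '/mnt/storage/comfyui/models/loras']
  ```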
- Adding absolute paths containing models/ now works more reliably
- All the base paths and directories are now queried more consistently
- Validation errors that occur early in the lifecycle of prompt
  execution are now propagated to callers of the EmbeddedComfyClient,
  including error messages about missing node classes (see the usage
  sketch after this item)
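  A hypothetical usage sketch; the import path and method names are
  assumptions about this fork's API, and the point is only that early
  validation failures now surface at the call site:

  ```python
  # Import path and method names are assumptions, not confirmed API.
  from comfy.client.embedded_comfy_client import EmbeddedComfyClient

  async def run(prompt):
      async with EmbeddedComfyClient() as client:
          try:
              return await client.queue_prompt(prompt)
          except Exception as exc:
              # e.g. the prompt references a node class that isn't installed
              print(f"prompt rejected during validation: {exc}")
              raise
  ```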
- The execution context now includes the node_id and the prompt_id
- Latent previews are now sent with a node_id. This is not backwards
compatible with old frontends.
- Dependency execution errors are now modeled correctly.
- Distributed progress encodes image previews with node and prompt IDs.
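  Illustrative consumer-side handling of the enriched preview messages
  (the field names are assumed, not taken from the actual protocol):

  ```python
  def on_preview(message, previews):
      # Because each preview now carries the prompt and node that produced
      # it, a client can route the image to the right node's UI element
      # instead of guessing which node is currently running.
      key = (message["prompt_id"], message["node_id"])
      previews[key] = message["image"]
  ```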
- Type annotations for models
- The frontend was updated to use node IDs with previews
- Improvements to torch.compile experiments
- Some controlnet_aux nodes were upstreamed