I think the issue this was working around has been solved.
If you notice that this change slows things down or causes stutters on
your AMD GPU with ROCm on Linux, please report it.
* Add oneAPI device selector and some other minor changes.
* Fix device selector variable name.
* Flip minor version check sign.
* Undo changes to README.md.
This should make it possible to generate higher-resolution images and longer
videos by further offloading weights to CPU memory.
Please report an issue if this slows things down on your system.
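As a rough illustration of the idea (not the code used here), offloading amounts to keeping a module's weights in system RAM and moving them onto the GPU only for the duration of a forward pass; the helper below is hypothetical and assumes a CUDA device.

```python
import torch

def run_offloaded(module: torch.nn.Module, x: torch.Tensor, device: str = "cuda"):
    """Hypothetical helper: keep weights in VRAM only while they are needed."""
    module.to(device)             # move weights into GPU memory for this call
    try:
        with torch.no_grad():
            return module(x.to(device))
    finally:
        module.to("cpu")          # return weights to system RAM, freeing VRAM
        torch.cuda.empty_cache()  # release cached VRAM blocks back to the driver
```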
Now, if code misbehaves and keeps references to a model object when it should
not, the only symptom will be repeated messages in the log instead of the next
workflow crashing ComfyUI; a rough sketch of the detection idea follows the
list below.
* Less fragile memory management.
* Fix issue.
* Remove useless function.
* Prevent and detect some types of memory leaks.
* Run garbage collector when switching workflow if needed.
* Fix issue.
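A minimal sketch of the leak-detection idea, assuming a hypothetical `track`/`check_for_leaks` registry rather than the actual implementation: model objects are held through weak references, and anything still alive after a garbage-collection pass when switching workflows is logged instead of crashing the next run.

```python
import gc
import logging
import weakref

_tracked: list[weakref.ref] = []

def track(model) -> None:
    # Register a model that is expected to be released after its workflow ends.
    _tracked.append(weakref.ref(model))

def check_for_leaks() -> None:
    # Run the garbage collector when switching workflows, then report survivors.
    gc.collect()
    for ref in _tracked:
        model = ref()
        if model is not None:
            logging.warning("model %r is still referenced after unload", model)
```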
- Validation errors that occur early in the lifecycle of prompt
execution now get propagated to their callers in the
EmbeddedComfyClient. This includes error messages about missing node
classes.
- The execution context now includes the node_id and the prompt_id.
- Latent previews are now sent with a node_id (see the sketch after this
list). This is not backwards compatible with old frontends.
- Dependency execution errors are now modeled correctly.
- Distributed progress encodes image previews with node and prompt IDs.
- Type annotations for models
- The frontend was updated to use node IDs with previews
- Improvements to torch.compile experiments
- Some controlnet_aux nodes were upstreamed
- Experimental support for sage attention on Linux
- Diffusers loader now supports model indices
- Transformers model management now aligns with updates to ComfyUI
- Flux layers correctly use unbind
- Added float8 support for model loading in more places
- Experimental quantization approaches from Quanto and torchao
- Model upscaling interacts with memory management better
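A hypothetical sketch of what a preview message carrying the new IDs might look like; the field and type names below are illustrative, not ComfyUI's actual wire format.

```python
from dataclasses import dataclass

@dataclass
class PreviewMessage:
    prompt_id: str      # identifies the prompt execution the preview belongs to
    node_id: str        # identifies the node that produced the preview
    image_bytes: bytes  # the encoded preview image (e.g. JPEG or PNG)

def handle_preview(msg: PreviewMessage) -> None:
    # A frontend can attribute the preview to a specific node in a specific prompt.
    print(f"preview for node {msg.node_id} of prompt {msg.prompt_id}: "
          f"{len(msg.image_bytes)} bytes")
```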
This update also disables ROCm testing because it isn't reliable enough
on consumer hardware; the 7600 is not really supported by ROCm.