- adding known ControlNet models now works more reliably
- comfyui_controlnet_aux sometimes requests Torch Hub paths whose files take the form directory/filename.safetensors; this is now supported
- save_with_filename now correctly matches filenames again
- raise on file load failures in the base nodes
- transformers models now load with trust_remote_code=False whenever possible
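  A minimal sketch of the intended behavior, assuming a generic transformers model; the fallback logic and placeholder model id are illustrative, not this repo's exact code:

  ```python
  from transformers import AutoModel

  MODEL_ID = "some/model-id"  # placeholder, not a model this repo ships

  # Prefer trust_remote_code=False so no repository-supplied Python executes.
  try:
      model = AutoModel.from_pretrained(MODEL_ID, trust_remote_code=False)
  except ValueError:
      # Some architectures only ship as remote code; fall back explicitly.
      model = AutoModel.from_pretrained(MODEL_ID, trust_remote_code=True)
  ```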
- fix the canonicalize_map call for Windows-Linux interoperability
- the model weights generally have to be patched ahead of time for compilation to work
- the model downloader matches the folder_paths API a bit better
- tweak the logging from the execution node
- GGUF now works and includes 4-bit quants; this allows Wan to run on 24 GB VRAM GPUs
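  As a rough back-of-envelope (14B is the parameter count of the larger published Wan 2.1 variant; everything here is approximate):

  ```python
  # Weight-only memory for a 14B-parameter model at different precisions.
  params = 14e9
  print(f"fp16 : {params * 2 / 1e9:.0f} GB")    # ~28 GB, over a 24 GB budget
  print(f"4-bit: {params * 0.5 / 1e9:.0f} GB")  # ~7 GB, leaving headroom
  ```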
- the logger only shows full stack traces for errors
- add helper functions for the Colab notebook
- fix nvcr.io auth error
- LoRA issues are now reported more clearly
- the model downloader will use the Hugging Face cache and symlinking when your platform supports it
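  For reference, this is the standard huggingface_hub mechanism the downloader leans on (repo and filename are placeholders):

  ```python
  from huggingface_hub import hf_hub_download

  # Files land in the shared HF cache (~/.cache/huggingface/hub by default);
  # where the platform allows it, snapshots are symlinked instead of copied.
  path = hf_hub_download(
      repo_id="stabilityai/stable-diffusion-xl-base-1.0",  # placeholder
      filename="sd_xl_base_1.0.safetensors",               # placeholder
  )
  print(path)
  ```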
- torch compile node now correctly patches the model before compilation
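  A minimal sketch of the ordering this fixes, using a plain torch module; the in-place tweak stands in for ComfyUI's weight patching and is only an illustration:

  ```python
  import torch

  model = torch.nn.Linear(8, 8)

  # Apply weight patches (e.g. LoRA deltas) BEFORE compiling; patching after
  # torch.compile can invalidate the captured graph or be silently ignored.
  with torch.no_grad():
      model.weight += 0.01 * torch.randn_like(model.weight)  # stand-in patch

  compiled = torch.compile(model)
  out = compiled(torch.randn(1, 8))
  ```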
- add Xet support and add the Xet cache to the manageable directories
- Xet is enabled by default
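  If you need to relocate or disable the Xet cache yourself, huggingface_hub reads these environment variables (names per recent huggingface_hub docs; verify against your installed version):

  ```python
  import os

  # Relocate the Xet chunk cache (defaults to a directory under HF_HOME).
  os.environ["HF_XET_CACHE"] = "/data/hf-xet-cache"

  # Or opt out of Xet transfers entirely and fall back to plain HTTP.
  os.environ["HF_HUB_DISABLE_XET"] = "1"
  ```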
- fix logging to the root logger in various places
- improve logging about model unloading and loading
- TorchCompileNode now supports the VAE
- missing torchaudio now causes less noise in the logs
- feature flags now assume support for everything in the distributed progress context
- fix progress notifications
- Wan and Cosmos prompt upsamplers
- Fixed torch.compile issues
- Known models added
- Cosmos, Wan and Hunyuan Video resolutions now supported by Fit Image to Diffusion Size.
- Better error messages for Ampere and Triton interactions
- Cosmos now fully tested
- Preliminary support for essential Cosmos prompt "upsampler"
- Lumina tests
- Tweaks to language and image resizing nodes
- Fix for #31: all the samplers are now present again
- Fix #29: str(model) no longer raises exceptions, as it did with HyVideoModelLoader
- don't try to format CUDA tensors, because that can sometimes raise exceptions
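  An illustrative helper in the same spirit (the name and approach are assumptions for illustration, not this repo's code):

  ```python
  import torch

  def describe_tensor(t: torch.Tensor) -> str:
      # Summarize by metadata only: formatting the values of a CUDA tensor
      # forces a device sync and can raise, so never str() the data in logs.
      return f"Tensor(shape={tuple(t.shape)}, dtype={t.dtype}, device={t.device})"

  print(describe_tensor(torch.zeros(2, 3)))
  ```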
- cudaMallocAsync has been disabled for now due to torch 2.6.0 bugs
- improve Florence-2 support
- add support for PaliGemma 2. This requires a transformers fix that is currently staged in another repo; install it with
  `uv pip install --no-deps "transformers @ git+https://github.com/zucchini-nlp/transformers.git@paligemma-fix-kwargs"`
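  Once installed, loading follows the usual transformers pattern; the checkpoint below is the published PaliGemma 2 base model, shown for illustration:

  ```python
  from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

  model_id = "google/paligemma2-3b-pt-224"  # gated: accept the license on the Hub first
  processor = AutoProcessor.from_pretrained(model_id)
  model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)
  ```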
- Triton has been updated
- fix missing __init__.py files
- include detailed runtime instructions for Windows and macOS
- include instructions for agreeing to the terms of Hugging Face repositories
- always create directories by default when run interactively
- model downloader now supports multiple folder names for known models
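  For context, ComfyUI's folder_paths API maps a logical folder name to one or more directories; a hedged sketch of registering an extra location (the path is a placeholder):

  ```python
  import folder_paths

  # A logical name like "checkpoints" can map to several directories; the
  # downloader can now place a known model under any of the registered names.
  folder_paths.add_model_folder_path("checkpoints", "/mnt/models/checkpoints")
  print(folder_paths.get_folder_paths("checkpoints"))
  ```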
- improve logging in sd.py
- use upstream's logger when running interactively
- move the extra nodes files to where this fork expects them
- add the Mochi checkpoints to known models
- add a Mochi workflow test
- Experimental support for sage attention on Linux
- Diffusers loader now supports model indices
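  Model indices here presumably refers to the model_index.json that diffusers pipelines ship; for reference, the diffusers-native loader that consumes it looks like this (repo id is a placeholder):

  ```python
  from diffusers import DiffusionPipeline

  # from_pretrained reads model_index.json to discover the pipeline class and
  # its components (unet/transformer, vae, text encoders, scheduler).
  pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0")
  ```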
- Transformers model management now aligns with updates to ComfyUI
- Flux layers correctly use unbind
- Add float8 support for model loading in more places
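  A minimal illustration of float8 weight storage (available since PyTorch 2.1; this generic cast is not this repo's exact loading path):

  ```python
  import torch

  w = torch.randn(1024, 1024, dtype=torch.float16)
  w_fp8 = w.to(torch.float8_e4m3fn)  # 1 byte/element, halving fp16 storage
  w_back = w_fp8.to(torch.float16)   # upcast before compute on most kernels
  print(w_fp8.element_size(), w_fp8.dtype)
  ```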
- Experimental quantization approaches from Quanto and torchao
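  For example, the Optimum Quanto flow looks like this (the toy module and dtype choice are illustrative):

  ```python
  import torch
  from optimum.quanto import quantize, freeze, qint8

  model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU())
  quantize(model, weights=qint8)  # swap Linear weights for quantized versions
  freeze(model)                   # materialize the quantized weights
  out = model(torch.randn(1, 64))
  ```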
- Model upscaling interacts with memory management better
This update also disables ROCm testing because it isn't reliable enough on consumer hardware. The Radeon RX 7600 is not really supported by ROCm.