When the VAE catches this VRAM OOM, it launches the fallback logic
straight from the exception context. Python, however, keeps references
to the entire call stack that raised the exception, including all of its
local variables, for the sake of exception reporting and debugging. When
those locals are tensors, this can pin gigabytes of VRAM and prevent the
allocator from freeing it. So drop the exception context completely
before going back to the VAE via the tiler: exit the except block
carrying nothing but a flag.
This greatly improves the reliability of the tiler fallback, especially
on low-VRAM cards. With the bug, whenever the leak happened to pin more
than the headroom needed for a single tile, the tiler fallback would
itself OOM and fail the whole flow.
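A minimal sketch of the pattern, assuming hypothetical `decode_inner`/`decode_tiled` method names (the real names in the codebase may differ):

```python
import torch

def vae_decode_with_fallback(vae, samples):
    # Hypothetical wrapper illustrating the fix; method names are assumptions.
    needs_tiled = False
    try:
        return vae.decode_inner(samples)
    except torch.cuda.OutOfMemoryError:
        # Record only a flag. Leaving the except block drops the exception
        # and its traceback, releasing the frame locals (tensors) that were
        # pinning VRAM.
        needs_tiled = True
    if needs_tiled:
        torch.cuda.empty_cache()  # hand the freed blocks back to the allocator
        return vae.decode_tiled(samples)
```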
* Initial Chroma Radiance support
* Minor Chroma Radiance cleanups
* Update Radiance nodes to ensure latents/images are on the intermediate device
* Fix Chroma Radiance memory estimation.
* Increase Chroma Radiance memory usage factor
* Increase Chroma Radiance memory usage factor once again
* Ensure images are multiples of 16 for Chroma Radiance
Add batch dimension and fix channels when necessary in ChromaRadianceImageToLatent node
* Tile Chroma Radiance NeRF to reduce memory consumption, update memory usage factor
* Update Radiance to support conv nerf final head type.
* Allow setting NeRF embedder dtype for Radiance
Bump Radiance nerf tile size to 32
Support EasyCache/LazyCache on Radiance (maybe)
* Add ChromaRadianceStubVAE node
* Crop Radiance image inputs to multiples of 16 instead of erroring, in line with existing VAE behavior (see the crop sketch after this list)
* Convert Chroma Radiance nodes to V3 schema.
* Add ChromaRadianceOptions node and backend support.
Cleanups/refactoring to reduce code duplication with Chroma.
* Fix overriding the NeRF embedder dtype for Chroma Radiance
* Minor Chroma Radiance cleanups
* Move Chroma Radiance to its own directory in ldm
Minor code cleanups and tooltip improvements
* Fix Chroma Radiance embedder dtype overriding
* Remove Radiance dynamic nerf_embedder dtype override feature
* Unbork Radiance NeRF embedder init
* Remove Chroma Radiance image conversion and stub VAE nodes
Add a chroma_radiance option to the VAELoader builtin node which uses comfy.sd.PixelspaceConversionVAE
Add a PixelspaceConversionVAE to comfy.sd for converting BHWC 0..1 <-> BCHW -1..1
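A rough sketch of what such a pixel-space conversion "VAE" amounts to; the method names and the clamp are assumptions, not the actual comfy.sd.PixelspaceConversionVAE implementation:

```python
import torch

class PixelspaceConversionVAE:
    """Layout/range conversion only; no real encoding or decoding happens.

    encode: BHWC images in [0, 1] -> BCHW "latents" in [-1, 1]
    decode: BCHW tensors in [-1, 1] -> BHWC images in [0, 1]
    """

    def encode(self, images: torch.Tensor) -> torch.Tensor:
        # Move channels from last to second dim, rescale [0, 1] -> [-1, 1].
        return images.movedim(-1, 1) * 2.0 - 1.0

    def decode(self, latents: torch.Tensor) -> torch.Tensor:
        # Rescale [-1, 1] -> [0, 1] and move channels back to the last dim.
        return ((latents + 1.0) * 0.5).clamp(0.0, 1.0).movedim(1, -1)
```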
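The multiple-of-16 crop mentioned above is, in essence, the following (hypothetical function name, BHWC layout assumed):

```python
import torch

def crop_to_multiple(images: torch.Tensor, multiple: int = 16) -> torch.Tensor:
    # images: BHWC; crop height and width down to the nearest multiple,
    # mirroring how existing VAEs handle non-aligned inputs instead of erroring.
    h, w = images.shape[1], images.shape[2]
    return images[:, : h - h % multiple, : w - w % multiple, :]
```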
- GGUF now works and includes 4-bit quants; this allows WAN to run on 24GB VRAM GPUs
- logger only shows full stack for errors
- helper functions for the Colab notebook
- fix nvcr.io auth error
- LoRA issues are now reported more clearly
- model downloader uses the Hugging Face cache and symlinking when your platform supports it (see the sketch after this list)
- torch compile node now correctly patches the model before compilation
- add xet support and add the xet cache to manageable directories
- xet is enabled by default
- fix logging to the root logger in various places
- improve logging about model unloading and loading
- TorchCompileNode now supports the VAE
- missing torchaudio now causes less noise in the logs
- feature flags are assumed to support everything in the distributed progress context
- fix progress notifications
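For reference, downloads through huggingface_hub resolve into the shared cache and are symlinked into place on platforms that allow it; the repo id and filename below are placeholders, not models the downloader actually fetches:

```python
from huggingface_hub import hf_hub_download

# Resolves into the shared Hugging Face cache (HF_HOME/hub). On platforms
# with symlink support, the blob is stored once and symlinked into the
# snapshot directory instead of being copied.
path = hf_hub_download(
    repo_id="some-org/some-model",   # placeholder repo id
    filename="model.safetensors",    # placeholder filename
)
print(path)
```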
* Upload files for Chroma Implementation
* Remove trailing whitespace
* trim more trailing whitespace... oops
* remove unused imports
* Add supported_inference_dtypes
* Set min_length to 0 and remove attention_mask=True
* Set min_length to 1
* get_modulations added from blepping, plus minor changes
* Add lora conversion if statement in lora.py
* Update supported_models.py
* update model_base.py
* add upstream commits
* set ModelType.FLOW so the beta scheduler works properly
* Adjust memory usage factor and remove unnecessary code
* fix mistake
* reduce code duplication
* remove unused imports
* refactor for upstream sync
* sync chroma-support with upstream via syncbranch patch
* Update sd.py
* Add Chroma as option for the OptimalStepsScheduler node