* flux: Do the xq and xk ropes one at a time
This was doing independent, interleaved tensor math on the q and k
tensors, holding more than the minimum number of intermediates in VRAM.
On a bad day, it would OOM VRAM on the xk intermediates.
Do everything for q and then everything for k, so torch can garbage
collect all of q's intermediates before k allocates its own.
This reduces peak VRAM usage for some WAN2.2 inferences (at least).
* wan: Optimize qkv intermediates on attention
As commented. The former logic computed independent pieces of QKV in
parallel, which held more inference intermediates in VRAM and spiked
VRAM usage. Fully roping Q and garbage collecting its intermediates
before touching K reduces the peak inference VRAM usage (see the sketch
below).
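Both items describe the same pattern. A minimal sketch of the idea, assuming a hypothetical single-tensor `apply_rope_single` helper (the real flux/wan helpers rope q and k together):

```python
import torch

def apply_rope_single(x: torch.Tensor, freqs_cis: torch.Tensor) -> torch.Tensor:
    # Hypothetical single-tensor rope, mirroring the usual flux-style math.
    x_ = x.float().reshape(*x.shape[:-1], -1, 1, 2)
    out = freqs_cis[..., 0] * x_[..., 0] + freqs_cis[..., 1] * x_[..., 1]
    return out.reshape(*x.shape).type_as(x)

def rope_qk_sequential(xq, xk, freqs_cis):
    # Finish q entirely before starting k: once xq is reassigned, the q
    # temporaries are unreachable and the allocator can reuse their VRAM
    # for k's temporaries, so the peak is one set of intermediates, not two.
    xq = apply_rope_single(xq, freqs_cis)
    xk = apply_rope_single(xk, freqs_cis)
    return xq, xk
```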
* Initial Chroma Radiance support
* Minor Chroma Radiance cleanups
* Update Radiance nodes to ensure latents/images are on the intermediate device
* Fix Chroma Radiance memory estimation.
* Increase Chroma Radiance memory usage factor
* Increase Chroma Radiance memory usage factor once again
* Ensure images are multiples of 16 for Chroma Radiance
Add batch dimension and fix channels when necessary in ChromaRadianceImageToLatent node
* Tile Chroma Radiance NeRF to reduce memory consumption, update memory usage factor
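A minimal sketch of the tiling idea, with hypothetical names (the real tiling lives in the Radiance model code):

```python
import torch

def run_nerf_tiled(nerf_head, x: torch.Tensor, tile: int = 32) -> torch.Tensor:
    # Run the per-pixel NeRF head over the token dimension in chunks so
    # only one tile's activations are live at a time, trading a little
    # speed for a much lower VRAM peak.
    outs = [nerf_head(x[:, i:i + tile]) for i in range(0, x.shape[1], tile)]
    return torch.cat(outs, dim=1)
```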
* Update Radiance to support conv nerf final head type.
* Allow setting NeRF embedder dtype for Radiance
Bump Radiance nerf tile size to 32
Support EasyCache/LazyCache on Radiance (maybe)
* Add ChromaRadianceStubVAE node
* Crop Radiance image inputs to multiples of 16 instead of erroring, in line with existing VAE behavior
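A minimal sketch of the cropping behavior, assuming BHWC image tensors as ComfyUI uses:

```python
import torch

def crop_to_multiple(image: torch.Tensor, multiple: int = 16) -> torch.Tensor:
    # Crop height/width down to the nearest multiple instead of raising,
    # matching how the existing VAE code tolerates odd sizes.
    h = image.shape[1] - image.shape[1] % multiple
    w = image.shape[2] - image.shape[2] % multiple
    return image[:, :h, :w, :]
```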
* Convert Chroma Radiance nodes to V3 schema.
* Add ChromaRadianceOptions node and backend support.
Cleanups/refactoring to reduce code duplication with Chroma.
* Fix overriding the NeRF embedder dtype for Chroma Radiance
* Minor Chroma Radiance cleanups
* Move Chroma Radiance to its own directory in ldm
Minor code cleanups and tooltip improvements
* Fix Chroma Radiance embedder dtype overriding
* Remove Radiance dynamic nerf_embedder dtype override feature
* Unbork Radiance NeRF embedder init
* Remove Chroma Radiance image conversion and stub VAE nodes
Add a chroma_radiance option to the VAELoader builtin node which uses comfy.sd.PixelspaceConversionVAE
Add a PixelspaceConversionVAE to comfy.sd for converting BHWC 0..1 <-> BCHW -1..1
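A minimal sketch of the two conversions such a pixel-space VAE performs (function names are hypothetical; the real class is comfy.sd.PixelspaceConversionVAE):

```python
import torch

def image_to_latent(image: torch.Tensor) -> torch.Tensor:
    # BHWC in 0..1 -> BCHW in -1..1
    return image.movedim(-1, 1) * 2.0 - 1.0

def latent_to_image(latent: torch.Tensor) -> torch.Tensor:
    # BCHW in -1..1 -> BHWC in 0..1
    return ((latent + 1.0) * 0.5).clamp(0.0, 1.0).movedim(1, -1)
```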
* Looking into a @wrap_attn decorator that checks transformer_options for an 'optimized_attention_override' entry
* Created logging code for this branch so that it can be used to track down all the code paths where transformer_options would need to be added
* Fix memory usage issue with inspect
* Made WAN attention receive transformer_options, test node added to wan to test out attention override later
* Added **kwargs to all attention functions so transformer_options could potentially be passed through
* Make sure wrap_attn doesn't recurse into itself infinitely; attempt to load SageAttention and FlashAttention even when not enabled so they can be marked as available or not; create a registry of available attention functions
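A minimal sketch of what such a decorator could look like (signatures and names are assumptions, not the exact implementation):

```python
import functools

def wrap_attn(attn_func):
    # Attention functions take **kwargs so transformer_options can flow
    # through; the wrapper dispatches to an override when one is present.
    @functools.wraps(attn_func)
    def wrapper(q, k, v, heads, **kwargs):
        transformer_options = kwargs.get("transformer_options") or {}
        override = transformer_options.get("optimized_attention_override")
        if override is not None:
            # Hand the override the *unwrapped* function so that calling
            # back into it cannot recurse through this wrapper again.
            return override(attn_func, q, k, v, heads, **kwargs)
        return attn_func(q, k, v, heads, **kwargs)
    return wrapper
```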
* Turn off attention logging for now, make AttentionOverrideTestNode have a dropdown with available attention (this is a test node only)
* Make flux work with optimized_attention_override
* Add logs to verify optimized_attention_override is passed all the way into attention function
* Make Qwen work with optimized_attention_override
* Made hidream work with optimized_attention_override
* Made wan patches_replace work with optimized_attention_override
* Made SD3 work with optimized_attention_override
* Made HunyuanVideo work with optimized_attention_override
* Made Mochi work with optimized_attention_override
* Made LTX work with optimized_attention_override
* Made StableAudio work with optimized_attention_override
* Made optimized_attention_override work with ACE Step
* Made Hunyuan3D work with optimized_attention_override
* Make CosmosPredict2 work with optimized_attention_override
* Made CosmosVideo work with optimized_attention_override
* Made Omnigen 2 work with optimized_attention_override
* Made StableCascade work with optimized_attention_override
* Made AuraFlow work with optimized_attention_override
* Made Lumina work with optimized_attention_override
* Made Chroma work with optimized_attention_override
* Made SVD work with optimized_attention_override
* Fix WanI2VCrossAttention so that it expects to receive transformer_options
* Fixed Wan2.1 Fun Camera transformer_options passthrough
* Fixed WAN 2.1 VACE transformer_options passthrough
* Add optimized to get_attention_function
* Disable attention logs for now
* Remove attention logging code
* Remove _register_core_attention_functions, as we wouldn't want someone to call that, just in case
* Satisfy ruff
* Remove AttentionOverrideTest node, that's something to cook up for later
Load the projector.safetensors file with the ModelPatchLoader node and use
the siglip_vision_patch14_384.safetensors "clip vision" model and the
USOStyleReferenceNode.
* Attempting a universal implementation of EasyCache, starting with flux as a test; the math needed a few corrections, but once set right it works
* Fixed math to make the threshold work as expected; refactored code to use EasyCacheHolder instead of a dict wrapped by an object
* Use sigmas from transformer_options instead of timesteps to be compatible with a greater number of models; make end_percent work
* Make the log statement for non-skipped steps useful, preparing for per-cond caching
* Added DIFFUSION_MODEL wrapper around forward function for wan model
* Add subsampling for heuristic inputs
* Add subsampling to output_prev (output_prev_subsampled now)
* Properly consider conds in EasyCache logic
* Created SuperEasyCache to test what happens if caching and reuse are moved outside the scope of conds; added PREDICT_NOISE wrapper to facilitate this test (see the sketch below)
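A rough sketch of the caching heuristic these items describe (hypothetical names; the real EasyCacheHolder tracks more state): accumulate how much the subsampled input moved since the last full model call, and reuse the cached output while the total stays under the threshold.

```python
import torch

class EasyCacheSketch:
    def __init__(self, reuse_threshold: float, subsample_factor: int = 8):
        self.threshold = reuse_threshold
        self.sub = subsample_factor
        self.prev_input = None   # subsampled input at the previous step
        self.accum = 0.0         # accumulated relative input change

    def should_skip(self, x: torch.Tensor) -> bool:
        xs = x[..., ::self.sub, ::self.sub]  # subsample heuristic inputs
        if self.prev_input is None:
            self.prev_input = xs
            return False  # first step always runs the model
        rel = (xs - self.prev_input).abs().mean() / (self.prev_input.abs().mean() + 1e-8)
        self.prev_input = xs
        self.accum += rel.item()
        if self.accum < self.threshold:
            return True   # reuse the cached output delta for this step
        self.accum = 0.0  # change built up too much: run the model, reset
        return False
```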
* Change max reuse_threshold to 3.0
* Mark EasyCache/SuperEasyCache as experimental (beta)
* Make Lumina2 compatible with EasyCache
* Add EasyCache support for Qwen Image
* Fix missing comma, curse you Cursor
* Add EasyCache support to AceStep
* Add EasyCache support to Chroma
* Added EasyCache support to Cosmos Predict t2i
* Make EasyCache not crash with Cosmos Predict ImageToVideo latents, though it does not work well at all
* Add EasyCache support to hidream
* Added EasyCache support to hunyuan video
* Added EasyCache support to hunyuan3d
* Added EasyCache support to LTXV (not very good, but does not crash)
* Implemented EasyCache for aura_flow
* Renamed SuperEasyCache to LazyCache, hardcoded subsample_factor to 8 on nodes
* Extra logging when verbose is true for EasyCache
These are not real controlnets but patches on the model, so they will be
treated as such.
Put them in the models/model_patches/ folder.
Use the new ModelPatchLoader and QwenImageDiffsynthControlnet nodes.
This node is only useful if someone trains the kontext model to properly
use multiple reference images via the index method.
The default is the offset method, which feeds the multiple images as if
they were stitched together into one. This method works with the current
flux kontext model.
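A rough conceptual illustration of the difference (purely hypothetical pseudo-structure, not the actual kontext code): offset keeps every reference at the same index and shifts spatial offsets as if the images were one stitched canvas, while index gives each reference its own index slot.

```python
def reference_positions(ref_shapes, method="offset"):
    # ref_shapes: list of (h, w) for each reference latent.
    positions, w_off = [], 0
    for i, (_h, w) in enumerate(ref_shapes):
        if method == "index":
            # each reference gets its own index slot
            positions.append({"index": i + 1, "h_off": 0, "w_off": 0})
        else:
            # offset (default): same index, spatially offset like one canvas
            positions.append({"index": 1, "h_off": 0, "w_off": w_off})
            w_off += w
    return positions
```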
* support wan camera models
* fix issues flagged by ruff check
* change camera_condition type; make camera_condition optional
* support camera trajectory nodes
* fix camera direction
Co-authored-by: Qirui Sun <sunqr0667@126.com>
* Upload files for Chroma Implementation
* Remove trailing whitespace
* trim more trailing whitespace... oops
* remove unused imports
* Add supported_inference_dtypes
* Set min_length to 0 and remove attention_mask=True
* Set min_length to 1
* get_modulations added from blepping and minor changes
* Add lora conversion if statement in lora.py
* Update supported_models.py
* update model_base.py
* add upstream commits
* set ModelType.FLOW, which makes the beta scheduler work properly
* Adjust memory usage factor and remove unnecessary code
* fix mistake
* reduce code duplication
* remove unused imports
* refactor for upstream sync
* sync chroma-support with upstream via syncbranch patch
* Update sd.py
* Add Chroma as option for the OptimalStepsScheduler node