* sd: soft_empty_cache on tiler fallback
This doesn't cost much and produces the expected VRAM reduction in
resource monitors when you fall back to the tiler.
* wan: vae: Don't recurse in local fns (move run_up)
Moved Decoder3d's recursive run_up out of forward into a class
method to avoid nested closure self-reference cycles. This avoids
cyclic garbage that delays collection of tensors, which in turn delays
VRAM release before the tiled fallback.
* ltx: vae: Don't recurse in local fns (move run_up)
Moved the recursive run_up out of forward into a class
method to avoid nested closure self-reference cycles. This avoids
cyclic garbage that delays collection of tensors, which in turn delays
VRAM release before the tiled fallback.
* ltx: vae: add cache state to downsample block
* ltx: vae: Add time stride awareness to causal_conv_3d
* ltx: vae: Automate truncation for encoder
Other VAEs just truncate without error. Do the same.
* sd/ltx: Make chunked_io a flag in its own right
Taking this bidirectional, so make it a purpose-named flag.
* ltx: vae: implement chunked encoder + CPU IO chunking
People are doing things with big frame counts in LTX, including V2V
flows. Implement the time-chunked encoder to keep VRAM down, with
the converse of the new CPU pre-allocation technique: the chunks
are brought in from the CPU just-in-time.
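A hedged sketch of the time-chunked encode with just-in-time CPU-to-GPU transfer; the function name, vae.encode, and the chunk size are illustrative, and the real implementation also carries causal-conv cache state across chunks:
```python
import torch

@torch.no_grad()
def encode_time_chunked(vae, pixels_cpu: torch.Tensor, chunk_t: int = 16,
                        device: str = "cuda") -> torch.Tensor:
    # pixels_cpu: (B, C, T, H, W) staged on the CPU; only one chunk of frames
    # lives on the GPU at a time.
    outs = []
    for t in range(0, pixels_cpu.shape[2], chunk_t):
        chunk = pixels_cpu[:, :, t:t + chunk_t].to(device, non_blocking=True)
        outs.append(vae.encode(chunk).cpu())  # stream results back to the CPU
    return torch.cat(outs, dim=2)
```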
* ltx: vae-encode: round chunk sizes more strictly
Only powers of 2 that are also multiples of 8 are valid due to cache slicing.
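A minimal sketch of the rounding rule (valid sizes are 8, 16, 32, ...; the helper name is illustrative):
```python
def round_chunk_size(requested: int) -> int:
    # Largest power of 2 that is >= 8 (hence a multiple of 8) and <= requested;
    # anything below 8 is clamped up to the smallest valid size.
    size = 8
    while size * 2 <= requested:
        size *= 2
    return size
```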
This is an experimental WIP option that might not work in your workflow but
should lower memory usage if it does.
Currently only the VAE and the load image node will output in fp16 when
this option is turned on.
PyTorch only filters for OOMs in its own allocators; however, there are
paths that can OOM in allocators created outside PyTorch. These
manifest as an AllocatorError, as PyTorch does not have universal
error translation to its OOM type on exception. Handle it. A log I have
for this also shows a duplicate async report of the error, so call the
async discarder to clean up and make these OOMs look like OOMs.
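A hedged sketch of the translation; the predicate below is illustrative, not the exact patch, and the async-discarder call is only paraphrased in the commit above:
```python
import torch

def looks_like_oom(e: Exception) -> bool:
    # PyTorch's own allocator raises torch.cuda.OutOfMemoryError; allocations
    # made outside it can surface as a generic allocator error instead, so
    # fall back to matching on the message.
    if isinstance(e, torch.cuda.OutOfMemoryError):
        return True
    return isinstance(e, RuntimeError) and "alloc" in str(e).lower()
```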
* sd: add support for clip model reconstruction
* nodes: SetClipHooks: Demote the dynamic model patcher
* mp: Make dynamic_disable more robust
The backup must not be cloned. In addition, add a delegate object
to ModelPatcherDynamic so that non-cloning code can perform
ModelPatcherDynamic demotion.
* sampler_helpers: Demote to non-dynamic model patcher when hooking
* Address CodeRabbit review comments
* mp: attach re-construction arguments to model patcher
When making a model patcher from a unet or ckpt, attach a callable
that can be invoked to replay the model construction. This
can be used to deep-clone a model patcher with respect to the actual model.
Originally written by Kosinkadink
f4b99bc623
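A hedged sketch of the idea with stand-in types (the real ModelPatcher and loaders are more involved):
```python
import functools

class ModelPatcher:
    # Minimal stand-in for the real ModelPatcher.
    def __init__(self, model):
        self.model = model
        self.model_reconstruct = None  # replayable constructor, if attached

def build_model(path, **options):
    # Stand-in for the real unet/ckpt constructor.
    return {"path": path, **options}

def make_patcher(path, **options):
    patcher = ModelPatcher(build_model(path, **options))
    # Attach a callable that replays the construction for later deep cloning.
    patcher.model_reconstruct = functools.partial(build_model, path, **options)
    return patcher

def deep_clone(patcher):
    # Rebuild the actual model rather than sharing it with the clone.
    clone = ModelPatcher(patcher.model_reconstruct())
    clone.model_reconstruct = patcher.model_reconstruct
    return clone
```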
* mp: Add disable_dynamic clone argument
Add a clone argument that lets a caller clone a ModelPatcher but disable
dynamic, demoting the clone to a regular MP. This is useful for legacy
features where dynamic_vram support is missing or TBD.
* torch_compile: disable dynamic_vram
Supporting dynamic_vram under torch.compile is a bigger feature. Disable
it for the interim to preserve functionality.
* make setattr safe for non-existent attributes
Handle the case where the attribute doesn't exist by returning a static
sentinel (distinct from None). If the sentinel is passed in as the set
value, delete the attribute.
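A minimal sketch of the sentinel pattern described above; the helper names are illustrative:
```python
_SENTINEL = object()  # static sentinel, distinct from None

def safe_getattr(obj, name):
    # Returns the sentinel instead of raising when the attribute is missing.
    return getattr(obj, name, _SENTINEL)

def safe_setattr(obj, name, value):
    # Passing the sentinel back restores the "attribute absent" state.
    if value is _SENTINEL:
        if hasattr(obj, name):
            delattr(obj, name)
    else:
        setattr(obj, name, value)
```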
* Account for dequantization and type-casts in offload costs
When measuring the cost of offload, identify weights that need a type
change or dequantization and add the size of the conversion result
to the offload cost.
This is mutually exclusive with lowvram patches, which already carry
a large conservative estimate that won't overlap the dequant cost, so
don't double count.
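A hedged sketch of the accounting; the function and flag are illustrative:
```python
import torch

def offload_cost_bytes(weight: torch.Tensor, compute_dtype: torch.dtype,
                       has_lowvram_patch: bool) -> int:
    cost = weight.numel() * weight.element_size()
    if has_lowvram_patch:
        # lowvram patches already carry a large conservative estimate,
        # so don't double count the conversion here.
        return cost
    if weight.dtype != compute_dtype:
        # A type-cast or dequantization materializes a second copy at the
        # compute dtype; add the size of that conversion result.
        cost += weight.numel() * torch.empty((), dtype=compute_dtype).element_size()
    return cost
```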
* Set the compute type on CLIP MPs
So that the loader can know the size of weights for dequant accounting.
* Add Kandinsky5 model support
Lite and Pro T2V tested to work
* Update kandinsky5.py
* Fix fp8
* Fix fp8_scaled text encoder
* Add transformer_options for attention
* Code cleanup, optimizations, use fp32 for all layers originally at fp32
* Add ImageToVideo node
* Fix I2V, add necessary latent post process nodes
* Support text to image model
* Support block replace patches (SLG mostly)
* Support official LoRAs
* Don't scale RoPE for lite model as that just doesn't work...
* Update supported_models.py
* Revert RoPE scaling to the simpler one
* Fix typo
* Handle latent dim difference for image model in the VAE instead
* Add node to use different prompts for clip_l and qwen25_7b
* Reduce peak VRAM usage a bit
* Further reduce peak VRAM consumption by chunking ffn
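A hedged sketch of the FFN chunking idea (a position-wise FFN has no cross-token interaction, so it can run on sequence slices; the chunk size is illustrative):
```python
import torch

def chunked_ffn(ffn, x: torch.Tensor, chunk: int = 4096) -> torch.Tensor:
    # x: (batch, seq, dim); only one chunk's activations peak at a time.
    return torch.cat([ffn(x[:, i:i + chunk])
                      for i in range(0, x.shape[1], chunk)], dim=1)
```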
* Update chunking
* Update memory_usage_factor
* Code cleanup, don't force the fp32 layers as it has minimal effect
* Allow for stronger changes with first frames normalization
Default values are too weak for any meaningful changes; these should
probably be exposed as advanced node options when that's available.
* Add image model's own chat template, remove unused image2video template
* Remove hard error in ReplaceVideoLatentFrames node
* Update kandinsky5.py
* Update supported_models.py
* Fix typos in prompt template
They have since been fixed in the original repository as well
* Update ReplaceVideoLatentFrames
Add tooltips
Make source optional
Handle negative indices better
* Rename NormalizeVideoLatentFrames node
For a bit more clarity about what it does
* Fix NormalizeVideoLatentStart node output on non-op
I'm able to push VRAM above the estimate on partial unload. Bump the
estimate. This was experimentally determined with 720P and 480P
datapoints, calibrating for 24GB total VRAM.
* Support video tiny VAEs
* lighttaew scaling fix
* Also support video TAEs in previews
Only the first frame for now, as live preview playback is currently only
available through VHS custom nodes.
* Support Wan 2.1 lightVAE
* Relocate elif block and set Wan VAE dim directly without using pruning rate for lightvae