ComfyUI/comfy/ldm/wan
rattus 035414ede4
Reduce WAN VAE VRAM, Save use cases for OOM/Tiler (#13014)
* wan: vae: encoder: Add feature cache layer that corks singles

If a downsample step yields only a single frame, store it in the feature
cache and return nothing to the top level. This improves cacheability
and also prepares support for processing frames two by two rather than
four by four.
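A minimal sketch of this "corking" behavior, assuming a hypothetical helper name and a dict-style feature cache keyed by layer index (the real code lives in `vae.py`):

```python
import torch

def cork_single(x, feat_cache, cache_idx):
    """x: activation shaped (B, C, T, H, W).

    If only one frame survived the downsample, hold it in the feature
    cache for the next call and return None so the top level skips it.
    """
    if x.shape[2] == 1:
        feat_cache[cache_idx] = x  # cork the lone frame
        return None
    return x
```

With this in place the caller only ever sees chunks of two or more frames, which is what makes the later cat-free caching possible.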

* wan: remove all concatenation with the feature cache

The loopers are now responsible for ensuring that non-final frames are
processed at least two-by-two, eliminating the need for this cat case.
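The looper's chunking invariant can be sketched as follows; the generator name and the leading single-frame chunk (typical of temporal-causal video VAEs) are assumptions, not the repository's actual API:

```python
def frame_chunks(num_frames):
    """Yield (start, end) slices: first frame alone, then pairs.

    Guarantees every non-final chunk after the first holds exactly
    two frames, so downstream code never needs to cat with the cache.
    """
    yield (0, 1)
    for t in range(1, num_frames, 2):
        yield (t, min(t + 2, num_frames))
```

Because only the final chunk can be a singleton, the feature cache can simply cork it and wait for the next call.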

* wan: vae: recurse and chunk for 2+2 frames on decode

Avoid having to clone off slices of 4 frame chunks and reduce the size
of the big 6 frame convolutions down to 4. Save the VRAMs.

* wan: encode frames 2x2.

Reduce VRAM usage greatly by encoding frames 2 at a time rather than
4.
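A hedged sketch of encoding two frames at a time, using a stand-in `encode_fn` (the real encoder and its temporal compression are more involved):

```python
import torch

def encode_in_pairs(encode_fn, frames):
    """frames: (B, C, T, H, W). Encode the first frame alone, then the
    rest two at a time, keeping peak activation memory proportional to
    a 2-frame chunk instead of a 4-frame one."""
    outs = [encode_fn(frames[:, :, :1])]
    for t in range(1, frames.shape[2], 2):
        outs.append(encode_fn(frames[:, :, t:t + 2]))
    return torch.cat(outs, dim=2)
```

The VRAM saving comes from the chunk size, not the encoder itself: each call materializes convolutions over at most two frames.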

* wan: vae: remove cloning

The loopers now control the chunking such that there are never more
than 2 frames per slice, so just cache these slices directly and avoid
the clone allocations completely.
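Caching a slice directly means storing a view rather than a copy; a minimal sketch, assuming the caller never mutates the tensor after caching (the helper name is hypothetical):

```python
import torch

def cache_tail(x, feat_cache, cache_idx):
    """Cache the trailing frame of x as a view, not a clone.

    Safe only because the loopers guarantee at most 2 frames per chunk
    and the cached activation is read before x is reused (assumption).
    """
    feat_cache[cache_idx] = x[:, :, -1:]  # view: no new allocation
```

Dropping the `.clone()` removes one full-frame activation allocation per cached layer per chunk.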

* wan: vae: free consumer caller tensors on recursion
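One way to read "free caller tensors on recursion" is that each level drops its own reference to the input before recursing, so only one activation per depth stays alive; a sketch under that assumption, with stand-in block callables:

```python
import torch

def down_recurse(blocks, x):
    """Apply blocks in sequence, releasing this frame's input
    activation (via del) before recursing into the next block."""
    if not blocks:
        return x
    y = blocks[0](x)
    del x  # drop the caller-held reference so it can be freed now
    return down_recurse(blocks[1:], y)
```

Without the `del`, Python keeps `x` alive for the whole recursive call chain, pinning one extra activation per level on the GPU.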

* wan: vae: restyle a little to match LTX
2026-03-17 17:34:39 -04:00
model_animate.py [BlockInfo] Wan (#10845) 2025-12-15 17:59:16 -08:00
model_multitalk.py Support Multi/InfiniteTalk (#10179) 2026-01-21 23:09:48 -05:00
model.py feat: Support SCAIL WanVideo model (#12614) 2026-02-28 16:49:12 -05:00
vae2_2.py WAN2.2: Fix cache VRAM leak on error (#10308) 2025-10-13 15:23:11 -04:00
vae.py Reduce WAN VAE VRAM, Save use cases for OOM/Tiler (#13014) 2026-03-17 17:34:39 -04:00