ComfyUI/comfy/ldm/wan
Deep Mehta 8e4bc0edac fix: prevent autocast crash with fp8 weights in WAN model addcmul ops
PyTorch 2.8's autocast `prioritize` function doesn't handle FP8
ScalarTypes, causing a sporadic "Unexpected floating ScalarType in
at::autocast::prioritize" RuntimeError when fp8_e4m3fn_fast weights
are used with the WAN 2.1 model. This wraps all torch.addcmul calls
in the WAN attention blocks in an autocast-disabled context when
autocast is active, matching the existing pattern used in
sub_quadratic_attention.py and other ComfyUI modules.
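A minimal sketch of the pattern this commit describes: check whether autocast is active and, if so, run `torch.addcmul` with autocast disabled so PyTorch's autocast dtype-promotion logic never sees an FP8 ScalarType. The helper name `safe_addcmul` and the float32 upcast are illustrative assumptions, not the actual ComfyUI code.

```python
import torch

def safe_addcmul(x, t1, t2, value=1.0):
    # Hypothetical helper illustrating the fix: when autocast is active,
    # disable it for this single op. Autocast's prioritize() cannot handle
    # FP8 ScalarTypes, so we promote inputs to float32 ourselves and cast
    # the result back to the input dtype.
    if torch.is_autocast_enabled():
        with torch.autocast(device_type=x.device.type, enabled=False):
            out = torch.addcmul(x.float(), t1.float(), t2.float(), value=value)
            return out.to(x.dtype)
    # No autocast active: the plain op is safe.
    return torch.addcmul(x, t1, t2, value=value)
```

The per-op `enabled=False` context is the same approach ComfyUI already uses in `sub_quadratic_attention.py`; it keeps the rest of the model under autocast while exempting only the problematic op.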

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-09 17:31:08 -07:00
model_animate.py [BlockInfo] Wan (#10845) 2025-12-15 17:59:16 -08:00
model_multitalk.py Support Multi/InfiniteTalk (#10179) 2026-01-21 23:09:48 -05:00
model.py fix: prevent autocast crash with fp8 weights in WAN model addcmul ops 2026-04-09 17:31:08 -07:00
vae2_2.py WAN2.2: Fix cache VRAM leak on error (#10308) 2025-10-13 15:23:11 -04:00
vae.py wan: vae: Fix light/color change (#13101) 2026-03-21 18:44:35 -04:00