ComfyUI/comfy/ldm/wan
Deep Mehta 7db81e43df fix: prevent autocast crash in WAN model addcmul ops
Wraps torch.addcmul calls in WAN attention blocks in an autocast-disabled
context to prevent the 'Unexpected floating ScalarType in at::autocast::prioritize'
RuntimeError. This occurs when upstream nodes (e.g. SAM3) leave CUDA autocast
enabled; PyTorch 2.8's autocast promote dispatch for addcmul then hits an
unhandled dtype in the prioritize function.

Uses torch.is_autocast_enabled(device_type) (the non-deprecated API) and applies
the workaround only when autocast is actually active, so there is zero overhead otherwise.
2026-04-09 21:08:49 -07:00
model_animate.py [BlockInfo] Wan (#10845) 2025-12-15 17:59:16 -08:00
model_multitalk.py Support Multi/InfiniteTalk (#10179) 2026-01-21 23:09:48 -05:00
model.py fix: prevent autocast crash in WAN model addcmul ops 2026-04-09 21:08:49 -07:00
vae2_2.py WAN2.2: Fix cache VRAM leak on error (#10308) 2025-10-13 15:23:11 -04:00
vae.py wan: vae: Fix light/color change (#13101) 2026-03-21 18:44:35 -04:00