Mirror of https://github.com/comfyanonymous/ComfyUI.git (synced 2026-04-11 19:12:34 +08:00)
PyTorch 2.8's autocast `prioritize` function doesn't handle FP8 ScalarTypes, causing a sporadic "Unexpected floating ScalarType in at::autocast::prioritize" RuntimeError when fp8_e4m3fn_fast weights are used with the WAN 2.1 model. This wraps all torch.addcmul calls in the WAN attention blocks with an autocast-disabled context when autocast is active, matching the existing pattern used in sub_quadratic_attention.py and other ComfyUI modules.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Files:

- model_animate.py
- model_multitalk.py
- model.py
- vae2_2.py
- vae.py
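The fix described in the commit message can be sketched as a small wrapper: when autocast is active, run `torch.addcmul` with autocast disabled so its arguments never reach `at::autocast::prioritize`. This is a minimal illustration of the pattern, not the actual ComfyUI code; the helper name is hypothetical.

```python
import torch

def addcmul_no_autocast(input, tensor1, tensor2, *, value=1.0):
    # Hypothetical helper sketching the described workaround: FP8-adjacent
    # dtypes can trigger "Unexpected floating ScalarType in
    # at::autocast::prioritize" under autocast, so when autocast is enabled
    # we temporarily disable it around the addcmul call.
    if torch.is_autocast_enabled():
        with torch.autocast(device_type=input.device.type, enabled=False):
            return torch.addcmul(input, tensor1, tensor2, value=value)
    # Autocast is off: call addcmul directly.
    return torch.addcmul(input, tensor1, tensor2, value=value)
```

With autocast off, the wrapper is a plain pass-through, so existing call sites keep their behavior; the extra context manager only engages on autocast-enabled paths.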