Mirror of https://github.com/comfyanonymous/ComfyUI.git (synced 2026-04-11 19:12:34 +08:00)
Wraps torch.addcmul calls in WAN attention blocks in an autocast-disabled context to prevent the 'Unexpected floating ScalarType in at::autocast::prioritize' RuntimeError. This occurs when upstream nodes (e.g. SAM3) leave CUDA autocast enabled: PyTorch 2.8's autocast promote dispatch for addcmul hits an unhandled dtype in the prioritize function. Uses torch.is_autocast_enabled(device_type) (the non-deprecated API) and applies the workaround only when autocast is actually active, so there is zero overhead otherwise.
| File |
|---|
| model_animate.py |
| model_multitalk.py |
| model.py |
| vae2_2.py |
| vae.py |