Mirror of https://github.com/comfyanonymous/ComfyUI.git, synced 2026-03-17 07:05:12 +08:00.
SageAttention's quantized kernels produce NaN in the Hunyuan3D v2.1 diffusion transformer, causing the downstream VoxelToMesh node to generate zero vertices and crash in save_glb.

Fix: add `low_precision_attention=False` to both `optimized_attention` calls in the v2.1 DiT (the CrossAttention and Attention classes), following the same pattern used by ACE (ace_step15.py). This makes SageAttention fall back to PyTorch attention for Hunyuan3D only, while all other models keep the SageAttention speedup.

Root cause: the 3D occupancy/SDF prediction requires higher numerical precision at voxel boundaries than SageAttention's quantized kernels provide; image and video diffusion tolerate this precision loss.

Fixes: comfyanonymous/ComfyUI#10943

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
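The change described above can be sketched as follows. Both `optimized_attention` and its `low_precision_attention` keyword are taken from the commit text; the toy implementation below is an illustrative stand-in, not ComfyUI's real code, and the actual signatures may differ. The "low precision" path here simulates a quantized kernel by rounding through float16, while `low_precision_attention=False` keeps full float32 precision, mirroring the fallback the commit enables for Hunyuan3D.

```python
import numpy as np


def optimized_attention(q, k, v, heads, low_precision_attention=True):
    """Toy scaled dot-product attention (hypothetical stand-in).

    When low_precision_attention is True, inputs are rounded through
    float16 to mimic a quantized kernel; False keeps float32 throughout,
    which is the fallback path the commit enables for Hunyuan3D v2.1.
    """
    if low_precision_attention:
        q, k, v = (x.astype(np.float16).astype(np.float32) for x in (q, k, v))
    scale = 1.0 / np.sqrt(q.shape[-1])
    scores = q @ k.swapaxes(-1, -2) * scale
    scores -= scores.max(axis=-1, keepdims=True)  # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v


class CrossAttention:
    """Minimal wrapper showing where the flag is threaded through."""

    def __init__(self, heads=4):
        self.heads = heads

    def forward(self, q, k, v):
        # Force the full-precision path, as the commit does for the
        # Hunyuan3D v2.1 DiT only.
        return optimized_attention(q, k, v, self.heads,
                                   low_precision_attention=False)


rng = np.random.default_rng(0)
q = rng.standard_normal((2, 8, 64)).astype(np.float32)
out = CrossAttention().forward(q, q, q)
assert out.shape == q.shape and np.isfinite(out).all()
```

Passing the flag per call site, rather than toggling a global, matches the commit's intent: only the model that is numerically sensitive opts out of the fast kernel, and every other model keeps the speedup.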
Directory listing:

- ace
- anima
- audio
- aura
- cascade
- chroma
- chroma_radiance
- cosmos
- flux
- genmo
- hidream
- hunyuan3d
- hunyuan3dv2_1
- hunyuan_video
- hydit
- kandinsky5
- lightricks
- lumina
- mmaudio/vae
- models
- modules
- omnigen
- pixart
- qwen_image
- wan
- common_dit.py
- util.py