Mirror of https://github.com/comfyanonymous/ComfyUI.git
* Add --use-flash-attention flag. This is useful on AMD systems, as Flash Attention builds are still roughly 10% faster than PyTorch cross-attention.
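As a rough illustration, the sketch below shows how such a flag might route attention through the `flash-attn` package when it is installed, falling back to PyTorch's built-in `scaled_dot_product_attention` otherwise. The function name, flag plumbing, and tensor layout here are assumptions for illustration, not ComfyUI's actual implementation.

```python
# Illustrative sketch only -- not ComfyUI's actual code.
# Shows how a --use-flash-attention flag could select an attention backend.
import torch

try:
    # flash-attn's functional API expects (batch, seqlen, heads, head_dim)
    # fp16/bf16 CUDA tensors.
    from flash_attn import flash_attn_func
    FLASH_ATTN_AVAILABLE = True
except ImportError:
    FLASH_ATTN_AVAILABLE = False


def attention(q, k, v, use_flash_attention=False):
    """q, k, v: (batch, heads, seqlen, head_dim) tensors."""
    if use_flash_attention and FLASH_ATTN_AVAILABLE:
        # Reorder to (batch, seqlen, heads, head_dim) for flash_attn_func,
        # then restore the original layout afterwards.
        out = flash_attn_func(
            q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2)
        )
        return out.transpose(1, 2)
    # Default path: PyTorch's fused scaled-dot-product attention.
    return torch.nn.functional.scaled_dot_product_attention(q, k, v)
```

Keeping the PyTorch path as the default means the flag degrades gracefully on systems without a working flash-attn build.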
| Name | Last commit | Last updated |
|---|---|---|
| .. | | |
| diffusionmodules | | |
| distributions | | |
| encoders | | |
| attention.py | | |
| ema.py | | |
| sub_quadratic_attention.py | | |
| temporal_ae.py | | |