The xFormers project has migrated its fused multi-head attention (FMHA)
implementation to a new standalone package called `mslk` (Meta
Superintelligence Labs Kernels). The `xformers` package now re-exports
from `mslk` for backward compatibility, but direct dependence on `mslk`
is preferred going forward.
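For context, a minimal sketch of that compatibility story (not code from this commit, and assuming, as stated above, that `mslk.attention.fmha` exposes the same `memory_efficient_attention` entry point that `xformers.ops` re-exports):

```python
import torch

# Minimal sketch: prefer the new standalone mslk package and fall back to the
# xformers re-export when mslk is not installed. The fallback relies on
# xformers.ops keeping the same API, as described above.
try:
    from mslk.attention import fmha
except ImportError:
    import xformers.ops as fmha  # legacy path: xformers re-exports the kernels

# Dummy (batch, seq_len, heads, head_dim) half-precision tensors on GPU,
# the typical setup for memory_efficient_attention.
q = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)

out = fmha.memory_efficient_attention(q, k, v, attn_bias=None)  # same shape as q
```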
This commit updates all FMHA import sites to use
`mslk.attention.fmha` instead of `xformers.ops`. All user-facing
behavior -- CLI arguments, environment variables, log messages, error
messages, and documentation -- remains unchanged.
What changed:
- `import xformers` / `import xformers.ops` replaced with
`import mslk` / `import mslk.attention.fmha` in:
comfy/model_management.py
comfy/ldm/modules/attention.py
comfy/ldm/modules/diffusionmodules/model.py
comfy/ldm/pixart/blocks.py
- Calls to `xformers.ops.memory_efficient_attention(...)` replaced with
  `mslk.attention.fmha.memory_efficient_attention(...)` (illustrated in the
  sketch after this list).
- Version-gating logic for old xformers bugs (0.0.18, 0.0.2x) removed,
as those versions predate the mslk migration.
- The pip dependency is now `mslk` rather than `xformers`.
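As an illustrative before/after of the mechanical replacement at a call site, modeled loosely on the xformers path in `comfy/ldm/modules/attention.py`; the helper name `fmha_attention` and the tensor layout here are hypothetical, not the repository's actual code:

```python
import torch
import mslk
import mslk.attention.fmha

# Before this commit, an equivalent helper would have been written as:
#     import xformers
#     import xformers.ops
#     out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=mask)

def fmha_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor, mask=None):
    # q, k, v laid out as (batch, seq_len, heads, head_dim); an optional
    # attention mask is forwarded as attn_bias, mirroring the old call.
    return mslk.attention.fmha.memory_efficient_attention(q, k, v, attn_bias=mask)
```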
This migration was prepared by the xFormers team. We have done our best
to ensure correctness and preserve all existing behavior, but we welcome
feedback from maintainers if anything should be adjusted.