ComfyUI/comfy/ldm
Luca Wehrstedt 2fde495c33 Replace xformers FMHA imports with mslk package
The xFormers project has migrated its fused multi-head attention (FMHA)
implementation to a new standalone package called `mslk` (Meta
Superintelligence Labs Kernels). The `xformers` package now re-exports
from `mslk` for backward compatibility, but depending directly on
`mslk` is preferred going forward.

This commit updates all FMHA import sites to use
`mslk.attention.fmha` instead of `xformers.ops`. All user-facing
behavior -- CLI arguments, environment variables, log messages, error
messages, and documentation -- remains unchanged.

What changed:
- `import xformers` / `import xformers.ops` replaced with
  `import mslk` / `import mslk.attention.fmha` in:
    comfy/model_management.py
    comfy/ldm/modules/attention.py
    comfy/ldm/modules/diffusionmodules/model.py
    comfy/ldm/pixart/blocks.py
- Calls to `xformers.ops.memory_efficient_attention(...)` replaced with
  `mslk.attention.fmha.memory_efficient_attention(...)`.
- Version-gating logic for old xformers bugs (0.0.18, 0.0.2x) removed,
  as those versions predate the mslk migration.
- The pip dependency is now `mslk` rather than `xformers`.
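The import swap described above can be sketched as follows. This is an
illustrative sketch only: the guarded try/except fallback and the
`memory_efficient_attention_available` helper are not part of the commit,
which replaces the imports unconditionally.

```python
# Sketch of the described migration (illustrative; the actual commit
# imports mslk unconditionally rather than falling back).
try:
    from mslk.attention import fmha as _fmha  # new standalone package
except ImportError:
    try:
        import xformers.ops as _fmha  # legacy re-export, same API
    except ImportError:
        _fmha = None  # neither package installed

def memory_efficient_attention_available() -> bool:
    """True when a fused memory-efficient attention op is importable."""
    return _fmha is not None and hasattr(_fmha, "memory_efficient_attention")
```

Call sites can then stay unchanged apart from the module prefix, since
`mslk.attention.fmha.memory_efficient_attention(...)` keeps the signature
that `xformers.ops` re-exports.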

This migration was prepared by the xFormers team. We have done our best
to ensure correctness and preserve all existing behavior, but we welcome
feedback from maintainers if anything should be adjusted.
2026-04-23 04:14:12 -07:00
ace Support Ace Step 1.5 XL model. (#13317) 2026-04-07 03:13:47 -04:00
anima Fix anima LLM adapter forward when manual cast (#12504) 2026-02-17 07:56:44 -08:00
audio Enable Runtime Selection of Attention Functions (#9639) 2025-09-12 18:07:38 -04:00
aura Enable Runtime Selection of Attention Functions (#9639) 2025-09-12 18:07:38 -04:00
cascade cascade: remove dead weight init code (#13026) 2026-03-17 20:59:10 -04:00
chroma Implement NAG on all the models based on the Flux code. (#12500) 2026-02-16 23:30:34 -05:00
chroma_radiance Use torch RMSNorm for flux models and refactor hunyuan video code. (#12432) 2026-02-13 15:35:13 -05:00
cosmos Some fixes to previous pr. (#12339) 2026-02-06 20:14:52 -05:00
ernie Some optimizations to make Ernie inference a bit faster. (#13472) 2026-04-18 23:02:29 -04:00
flux Add a supports_fp64 function. (#13368) 2026-04-11 21:06:36 -04:00
genmo Enable Runtime Selection of Attention Functions (#9639) 2025-09-12 18:07:38 -04:00
hidream Enable Runtime Selection of Attention Functions (#9639) 2025-09-12 18:07:38 -04:00
hunyuan3d Enable Runtime Selection of Attention Functions (#9639) 2025-09-12 18:07:38 -04:00
hunyuan3dv2_1 fix: disable SageAttention for Hunyuan3D v2.1 DiT (#12772) 2026-03-16 22:27:27 -04:00
hunyuan_video Implement NAG on all the models based on the Flux code. (#12500) 2026-02-16 23:30:34 -05:00
hydit Change cosmos and hydit models to use the native RMSNorm. (#7934) 2025-05-04 06:26:20 -04:00
kandinsky5 Fix qwen scaled fp8 not working with kandinsky. Make basic t2i wf work. (#11162) 2025-12-06 17:50:10 -08:00
lightricks Make the ltx audio vae more native. (#13486) 2026-04-21 11:02:42 -04:00
lumina Feat: z-image pixel space (model still training atm) (#12709) 2026-03-02 19:43:47 -05:00
mmaudio/vae Implement the mmaudio VAE. (#10300) 2025-10-11 22:57:23 -04:00
models Add support for small flux.2 decoder (#13314) 2026-04-07 03:44:18 -04:00
modules Replace xformers FMHA imports with mslk package 2026-04-23 04:14:12 -07:00
omnigen Enable Runtime Selection of Attention Functions (#9639) 2025-09-12 18:07:38 -04:00
pixart Replace xformers FMHA imports with mslk package 2026-04-23 04:14:12 -07:00
qwen_image Add pre attention and post input patches to qwen image model. (#12879) 2026-03-11 00:09:35 -04:00
rt_detr CORE-13 feat: Support RT-DETRv4 detection model (#12748) 2026-03-28 23:34:10 -04:00
supir feat: SUPIR model support (CORE-17) (#13250) 2026-04-18 23:02:01 -04:00
wan wan: vae: Fix light/color change (#13101) 2026-03-21 18:44:35 -04:00
common_dit.py add RMSNorm to comfy.ops 2025-04-14 18:00:33 -04:00
util.py New Year ruff cleanup. (#11595) 2026-01-01 22:06:14 -05:00