ComfyUI/comfy/ldm
rattus 783782d5d7
Implement block prefetch + LoRA async load, and adopt in LTX (Speedup!) (CORE-111) (#13618)
* mm: Use Aimdo raw allocator for cast buffers

PyTorch manages allocation of growing buffers on streams poorly. It
has no Windows support for the expandable-segments allocator (which is
the right tool for this job), while also segmenting the memory by
stream such that it can't be generally re-used. So kick the problem to
Aimdo, which can just grow a virtual region that's freed per stream.

* plan

* ops: move cpu handler up to the caller

* ops: split up prefetch from weight prep for the block prefetching API

Split up the casting and weight formatting/LoRA handling in preparation
for arbitrary prefetch support.

* ops: implement block prefetching API

Allow a model to construct a prefetch list and operate it for increased
async offload.

* ltxv2: Implement block prefetching

* Implement LoRA async offload

Implement async offload of LoRAs.
2026-05-02 19:23:24 -04:00
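The block-prefetching idea in the commit above (start loading the next block's weights while the current block computes) can be sketched in plain Python. This is an illustrative sketch only, not ComfyUI's actual API: `run_with_prefetch`, `load`, and `compute` are hypothetical names, and a real implementation would issue the loads as asynchronous copies on a side CUDA stream (synchronized with events) rather than as blocking calls.

```python
from collections import deque

def run_with_prefetch(blocks, load, compute, depth=1):
    """Run each block's compute while keeping `depth` blocks loaded ahead.

    `load(block)` returns the block's prepared weights; `compute(block, w)`
    runs the block with those weights. Both names are placeholders for
    whatever the model provides.
    """
    pending = deque()
    # Warm up: issue the first `depth` loads before any compute starts.
    for b in blocks[:depth]:
        pending.append(load(b))
    out = []
    for i, b in enumerate(blocks):
        weights = pending.popleft()
        nxt = i + depth
        if nxt < len(blocks):
            # Kick off the next load so it overlaps with this compute.
            pending.append(load(blocks[nxt]))
        out.append(compute(b, weights))
    return out
```

With `depth=1` each block's load is issued one compute ahead of its use, which is the interleaving that hides transfer latency behind compute when the loads are truly asynchronous.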
ace Support Ace Step 1.5 XL model. (#13317) 2026-04-07 03:13:47 -04:00
anima Fix anima LLM adapter forward when manual cast (#12504) 2026-02-17 07:56:44 -08:00
audio Enable Runtime Selection of Attention Functions (#9639) 2025-09-12 18:07:38 -04:00
aura Enable Runtime Selection of Attention Functions (#9639) 2025-09-12 18:07:38 -04:00
cascade cascade: remove dead weight init code (#13026) 2026-03-17 20:59:10 -04:00
chroma Implement NAG on all the models based on the Flux code. (#12500) 2026-02-16 23:30:34 -05:00
chroma_radiance Use torch RMSNorm for flux models and refactor hunyuan video code. (#12432) 2026-02-13 15:35:13 -05:00
cogvideo Cogvideox (#13402) 2026-04-29 19:30:08 -04:00
cosmos Some fixes to previous pr. (#12339) 2026-02-06 20:14:52 -05:00
ernie Some optimizations to make Ernie inference a bit faster. (#13472) 2026-04-18 23:02:29 -04:00
flux Add a supports_fp64 function. (#13368) 2026-04-11 21:06:36 -04:00
genmo Enable Runtime Selection of Attention Functions (#9639) 2025-09-12 18:07:38 -04:00
hidream Enable Runtime Selection of Attention Functions (#9639) 2025-09-12 18:07:38 -04:00
hunyuan3d Enable Runtime Selection of Attention Functions (#9639) 2025-09-12 18:07:38 -04:00
hunyuan3dv2_1 fix: disable SageAttention for Hunyuan3D v2.1 DiT (#12772) 2026-03-16 22:27:27 -04:00
hunyuan_video Implement NAG on all the models based on the Flux code. (#12500) 2026-02-16 23:30:34 -05:00
hydit
kandinsky5 Fix qwen scaled fp8 not working with kandinsky. Make basic t2i wf work. (#11162) 2025-12-06 17:50:10 -08:00
lightricks Implement block prefetch + LoRA async load, and adopt in LTX (Speedup!) (CORE-111) (#13618) 2026-05-02 19:23:24 -04:00
lumina Feat: z-image pixel space (model still training atm) (#12709) 2026-03-02 19:43:47 -05:00
mmaudio/vae Implement the mmaudio VAE. (#10300) 2025-10-11 22:57:23 -04:00
models Add support for small flux.2 decoder (#13314) 2026-04-07 03:44:18 -04:00
modules feat: SUPIR model support (CORE-17) (#13250) 2026-04-18 23:02:01 -04:00
omnigen Enable Runtime Selection of Attention Functions (#9639) 2025-09-12 18:07:38 -04:00
pixart
qwen_image Add pre attention and post input patches to qwen image model. (#12879) 2026-03-11 00:09:35 -04:00
rt_detr CORE-13 feat: Support RT-DETRv4 detection model (#12748) 2026-03-28 23:34:10 -04:00
sam3 Disable sageattention for SAM3 (#13529) 2026-04-23 11:14:42 -07:00
supir feat: SUPIR model support (CORE-17) (#13250) 2026-04-18 23:02:01 -04:00
wan wan: vae: Fix light/color change (#13101) 2026-03-21 18:44:35 -04:00
common_dit.py
util.py New Year ruff cleanup. (#11595) 2026-01-01 22:06:14 -05:00