ComfyUI/comfy/ldm
Jukka Seppänen be95871adc
feat: Gemma4 text generation support (CORE-30) (#13376)
* Initial Gemma4 support

* Parity with reference implementation

Outputs can match transformers 100% with the same SDPA flags; checkpoint this state, then optimize.

* Cleanup, video fixes

* Cleanup; enable fused RMS norm by default

* Update comment

* Cleanup

* Update sd.py

* Various fixes

* Add fp8 scaled embedding support

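A minimal sketch of what fp8 scaled embedding lookup could look like (hypothetical class, not ComfyUI's actual implementation; assumes a PyTorch build whose float8 tensors support indexing):

    import torch
    import torch.nn as nn

    class Fp8ScaledEmbedding(nn.Module):
        # Hypothetical sketch: weight stored as float8_e4m3fn with a
        # per-tensor scale, dequantized to the compute dtype on lookup.
        def __init__(self, num_embeddings, embedding_dim, dtype=torch.bfloat16):
            super().__init__()
            self.register_buffer("weight", torch.zeros(num_embeddings, embedding_dim, dtype=torch.float8_e4m3fn))
            self.register_buffer("scale", torch.ones(()))  # dequantization scale
            self.dtype = dtype

        def forward(self, input_ids):
            rows = self.weight[input_ids]  # gather fp8 rows before dequantizing
            return rows.to(self.dtype) * self.scale.to(self.dtype)
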
* Small fixes

* Translate think tokens

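A guess at the shape of this (the token strings below are placeholders, not Gemma4's real delimiters): map the model-specific thinking delimiters onto the common <think>...</think> convention so downstream consumers handle them uniformly.

    # Placeholder token strings -- the real Gemma4 delimiters live in gemma4.py.
    THINK_TOKEN_MAP = {
        "<start_of_thinking>": "<think>",
        "<end_of_thinking>": "</think>",
    }

    def translate_think_tokens(text: str) -> str:
        for src, dst in THINK_TOKEN_MAP.items():
            text = text.replace(src, dst)
        return text
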
* Fix image encoder attention mask type

So that it works with basic attention.

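The likely shape of this fix (a hedged sketch, not the actual code): the basic attention path adds the mask to the attention scores, so a boolean padding mask has to become an additive float mask first.

    import torch

    def to_additive_mask(bool_mask: torch.Tensor, dtype: torch.dtype) -> torch.Tensor:
        # Assumed convention: True = attend, False = masked out.
        # Basic attention computes `scores + mask`, so masked positions
        # need a large negative value instead of a bool.
        mask = torch.zeros_like(bool_mask, dtype=dtype)
        return mask.masked_fill_(~bool_mask, torch.finfo(dtype).min)
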
* Handle thinking tokens differently only for Gemma4

* Code cleanup

* Update nodes_textgen.py

* Use embed scale class instead of buffer

Slight difference from HF, but technically more accurate and simpler code.

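To illustrate the buffer-vs-class tradeoff (hypothetical module; Gemma-style models multiply token embeddings by sqrt(hidden_size)): a registered buffer gets cast along with the model weights, rounding the scale to the storage dtype the way HF does, while a small module can keep the exact Python float.

    import math
    import torch.nn as nn

    class EmbedScale(nn.Module):
        # Hypothetical sketch: hold the embedding scale as an exact float
        # instead of a dtype-cast buffer, and apply it at runtime.
        def __init__(self, hidden_size: int):
            super().__init__()
            self.scale = math.sqrt(hidden_size)

        def forward(self, x):
            return x * self.scale
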
* Default to fused rms_norm

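"Fused" here presumably means something like torch.nn.functional.rms_norm (available in PyTorch >= 2.4) instead of a hand-written reduction; a minimal sketch with a fallback, ignoring details such as Gemma's (1 + weight) parameterization:

    import torch
    import torch.nn.functional as F

    def rms_norm(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
        if hasattr(F, "rms_norm"):
            return F.rms_norm(x, (x.shape[-1],), weight=weight, eps=eps)  # fused path
        # Fallback: manual RMSNorm in fp32 for numerical stability.
        x32 = x.float()
        x32 = x32 * torch.rsqrt(x32.pow(2).mean(-1, keepdim=True) + eps)
        return (x32 * weight.float()).to(x.dtype)
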
* Update gemma4.py
2026-05-02 22:46:15 -04:00
ace Support Ace Step 1.5 XL model. (#13317) 2026-04-07 03:13:47 -04:00
anima Fix anima LLM adapter forward when manual cast (#12504) 2026-02-17 07:56:44 -08:00
audio Enable Runtime Selection of Attention Functions (#9639) 2025-09-12 18:07:38 -04:00
aura Enable Runtime Selection of Attention Functions (#9639) 2025-09-12 18:07:38 -04:00
cascade cascade: remove dead weight init code (#13026) 2026-03-17 20:59:10 -04:00
chroma Implement NAG on all the models based on the Flux code. (#12500) 2026-02-16 23:30:34 -05:00
chroma_radiance Use torch RMSNorm for flux models and refactor hunyuan video code. (#12432) 2026-02-13 15:35:13 -05:00
cogvideo Cogvideox (#13402) 2026-04-29 19:30:08 -04:00
cosmos Some fixes to previous pr. (#12339) 2026-02-06 20:14:52 -05:00
ernie Some optimizations to make Ernie inference a bit faster. (#13472) 2026-04-18 23:02:29 -04:00
flux Add a supports_fp64 function. (#13368) 2026-04-11 21:06:36 -04:00
genmo Enable Runtime Selection of Attention Functions (#9639) 2025-09-12 18:07:38 -04:00
hidream Enable Runtime Selection of Attention Functions (#9639) 2025-09-12 18:07:38 -04:00
hunyuan3d Enable Runtime Selection of Attention Functions (#9639) 2025-09-12 18:07:38 -04:00
hunyuan3dv2_1 fix: disable SageAttention for Hunyuan3D v2.1 DiT (#12772) 2026-03-16 22:27:27 -04:00
hunyuan_video Implement NAG on all the models based on the Flux code. (#12500) 2026-02-16 23:30:34 -05:00
hydit
kandinsky5 Fix qwen scaled fp8 not working with kandinsky. Make basic t2i wf work. (#11162) 2025-12-06 17:50:10 -08:00
lightricks Implement block prefetch + Lora Async load + and adopt in LTX (Speedup!) (CORE-111) (#13618) 2026-05-02 19:23:24 -04:00
lumina Feat: z-image pixel space (model still training atm) (#12709) 2026-03-02 19:43:47 -05:00
mmaudio/vae Implement the mmaudio VAE. (#10300) 2025-10-11 22:57:23 -04:00
models Add support for small flux.2 decoder (#13314) 2026-04-07 03:44:18 -04:00
modules feat: Gemma4 text generation support (CORE-30) (#13376) 2026-05-02 22:46:15 -04:00
omnigen Enable Runtime Selection of Attention Functions (#9639) 2025-09-12 18:07:38 -04:00
pixart Remove windows line endings. (#8866) 2025-07-11 02:37:51 -04:00
qwen_image Add pre attention and post input patches to qwen image model. (#12879) 2026-03-11 00:09:35 -04:00
rt_detr CORE-13 feat: Support RT-DETRv4 detection model (#12748) 2026-03-28 23:34:10 -04:00
sam3 Disable sageattention for SAM3 (#13529) 2026-04-23 11:14:42 -07:00
supir feat: SUPIR model support (CORE-17) (#13250) 2026-04-18 23:02:01 -04:00
wan wan: vae: Fix light/color change (#13101) 2026-03-21 18:44:35 -04:00
common_dit.py
util.py New Year ruff cleanup. (#11595) 2026-01-01 22:06:14 -05:00