Mirror of https://github.com/comfyanonymous/ComfyUI.git (synced 2026-05-09 00:32:31 +08:00)
Latest commit (Gemma4 support):

* Initial Gemma4 support
* Parity with reference implementation: outputs can 100% match transformers with the same sdpa flags; checkpoint this and then optimize
* Cleanup, video fixes
* Cleanup, enable fused rms_norm by default
* Update comment
* Cleanup
* Update sd.py
* Various fixes
* Add fp8 scaled embedding support
* Small fixes
* Translate think tokens
* Fix image encoder attention mask type so it works with basic attention
* Handle thinking tokens differently only for Gemma4
* Code cleanup
* Update nodes_textgen.py
* Use embed scale class instead of buffer: slight difference to HF, but technically more accurate and simpler code
* Default to fused rms_norm
* Update gemma4.py
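Two of the items above name concrete techniques: RMS normalization (enabled in fused form by default) and scaling token embeddings via a class applied at call time rather than a stored buffer. A minimal, unfused sketch of both, assuming the usual Gemma conventions (RMSNorm with a learned per-channel weight, embeddings scaled by the square root of the hidden size); the function names here are illustrative, not ComfyUI's actual API:

```python
import math

def rms_norm(x, weight, eps=1e-6):
    # Reference (unfused) RMSNorm: divide by the root-mean-square of the
    # vector, then apply the learned per-channel weight. A fused kernel
    # computes the same result in one pass for speed.
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [v / rms * w for v, w in zip(x, weight)]

def scale_embeddings(embeds, hidden_size):
    # Gemma-style embedding scale: multiply looked-up embeddings by
    # sqrt(hidden_size). Applying this at call time (as a small class or
    # function) avoids baking the scale into a stored buffer, which is
    # the design change the commit describes.
    scale = math.sqrt(hidden_size)
    return [v * scale for v in embeds]
```

Keeping the scale out of the weights also means the stored embedding tensor stays byte-identical to the checkpoint, which makes fp8-scaled variants simpler to handle.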
Directory contents:

- ace
- anima
- audio
- aura
- cascade
- chroma
- chroma_radiance
- cogvideo
- cosmos
- ernie
- flux
- genmo
- hidream
- hunyuan3d
- hunyuan3dv2_1
- hunyuan_video
- hydit
- kandinsky5
- lightricks
- lumina
- mmaudio/vae
- models
- modules
- omnigen
- pixart
- qwen_image
- rt_detr
- sam3
- supir
- wan
- common_dit.py
- util.py