Mirror of https://github.com/comfyanonymous/ComfyUI.git (synced 2026-05-06 07:12:30 +08:00)
* initial gemma4 support
* parity with reference implementation: outputs can 100% match transformers with the same sdpa flags; checkpoint this and then optimize
* Cleanup, video fixes
* cleanup, enable fused rms norm by default
* update comment
* Cleanup
* Update sd.py
* Various fixes
* Add fp8 scaled embedding support
* small fixes
* Translate think tokens
* Fix image encoder attention mask type so it works with basic attention
* Handle thinking tokens differently only for Gemma4
* Code cleanup
* Update nodes_textgen.py
* Use embed scale class instead of buffer (sketched after this list): slight difference to HF, but technically more accurate and simpler code
* Default to fused rms_norm
* Update gemma4.py
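The "embed scale class" and "fused rms_norm" items above refer to standard transformer components. Below is a minimal Python sketch, assuming a Gemma-style sqrt(hidden_size) embedding scale and a plain (non-fused) RMSNorm for reference; the class names, signatures, and defaults are illustrative, not ComfyUI's actual gemma4.py code.

```python
import torch
import torch.nn as nn


class EmbedScale(nn.Module):
    """Illustrative: scale token embeddings by sqrt(hidden_size) in a small
    module instead of registering the scale as a buffer."""

    def __init__(self, hidden_size: int):
        super().__init__()
        # Plain Python float: no extra state-dict entry, multiply runs in the
        # embedding's own dtype.
        self.scale = float(hidden_size) ** 0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.scale


class RMSNorm(nn.Module):
    """Illustrative reference RMSNorm; a fused kernel computes the same
    normalization in a single pass."""

    def __init__(self, hidden_size: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        dtype = x.dtype
        x = x.float()
        # Normalize by the root-mean-square of the last dimension, then scale.
        x = x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)
        return (self.weight * x).to(dtype)
```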
| Name |
|---|
| diffusionmodules |
| distributions |
| encoders |
| attention.py |
| ema.py |
| sdpose.py |
| sub_quadratic_attention.py |
| temporal_ae.py |