ComfyUI/comfy/text_encoders
jaystack.dev d22cc94f53
fix: ensure LTXAVTEModel uses half-precision for SageAttention compatibility
- Automatically detect device capabilities and default to bfloat16 (falling back to fp16) when no explicit dtype is provided
- Respect provided dtype_llama/dtype consistently across Gemma model, projection layer, and connectors
- Remove forced `out.float()` in encode_token_weights to prevent the output from being upcast to fp32 after projection
- This allows SageAttention's optimized kernel to run instead of falling back to PyTorch attention

Fixes the warning:
"Error running sage attention: Input tensors must be in dtype of torch.float16 or torch.bfloat16, using pytorch attention instead."
2026-01-15 14:44:03 -05:00
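The dtype selection described in the commit body could look roughly like the following. This is a hedged sketch, not the actual code from `lt.py`: `pick_te_dtype` is a hypothetical helper name, and the real implementation routes through ComfyUI's model-management utilities rather than calling CUDA APIs directly.

```python
import torch

def pick_te_dtype(dtype=None):
    """Choose a text-encoder dtype compatible with SageAttention.

    Hypothetical helper illustrating the commit's behavior: respect an
    explicitly provided dtype, otherwise default to a half-precision
    type based on device capabilities, since SageAttention's kernels
    only accept torch.float16 or torch.bfloat16 inputs.
    """
    # If the caller specified a dtype, respect it consistently.
    if dtype is not None:
        return dtype
    # Prefer bfloat16 on devices that support it; fall back to fp16.
    if torch.cuda.is_available() and torch.cuda.is_bf16_supported():
        return torch.bfloat16
    return torch.float16
```

The same reasoning explains dropping the forced `out.float()`: upcasting the projected output to fp32 would hand SageAttention tensors it cannot accept, triggering the PyTorch-attention fallback the commit message quotes.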
ace_lyrics_tokenizer Initial ACE-Step model implementation. (#7972) 2025-05-07 08:33:34 -04:00
byt5_tokenizer Support hunyuan image 2.1 regular model. (#9792) 2025-09-10 02:05:07 -04:00
hydit_clip_tokenizer
llama_tokenizer
qwen25_tokenizer Update qwen tokenizer to add qwen 3 tokens. (#11029) 2025-12-01 17:13:48 -05:00
t5_pile_tokenizer
t5_tokenizer
ace_text_cleaners.py Make japanese hiragana and katakana characters work with ACE. (#7997) 2025-05-08 03:32:36 -04:00
ace.py Make japanese hiragana and katakana characters work with ACE. (#7997) 2025-05-08 03:32:36 -04:00
aura_t5.py More flexible long clip support. 2025-04-15 10:32:21 -04:00
bert.py P2 of qwen edit model. (#9412) 2025-08-18 22:38:34 -04:00
byt5_config_small_glyph.json Support hunyuan image 2.1 regular model. (#9792) 2025-09-10 02:05:07 -04:00
cosmos.py Fix chroma fp8 te being treated as fp16. (#11795) 2026-01-10 14:40:42 -08:00
flux.py Flux2 Klein support. (#11890) 2026-01-15 10:33:15 -05:00
genmo.py Fix chroma fp8 te being treated as fp16. (#11795) 2026-01-10 14:40:42 -08:00
hidream.py Make old scaled fp8 format use the new mixed quant ops system. (#11000) 2025-12-05 14:35:42 -05:00
hunyuan_image.py Make old scaled fp8 format use the new mixed quant ops system. (#11000) 2025-12-05 14:35:42 -05:00
hunyuan_video.py Make old scaled fp8 format use the new mixed quant ops system. (#11000) 2025-12-05 14:35:42 -05:00
hydit_clip.json
hydit.py Add a T5TokenizerOptions node to set options for the T5 tokenizer. (#7803) 2025-04-25 19:36:00 -04:00
jina_clip_2.py Implement Jina CLIP v2 and NewBie dual CLIP (#11415) 2025-12-20 00:57:22 -05:00
kandinsky5.py Fix qwen scaled fp8 not working with kandinsky. Make basic t2i wf work. (#11162) 2025-12-06 17:50:10 -08:00
llama.py Flux2 Klein support. (#11890) 2026-01-15 10:33:15 -05:00
long_clipl.py Cleanup. 2025-04-15 12:13:28 -04:00
lt.py fix: ensure LTXAVTEModel uses half-precision for SageAttention compatibility 2026-01-15 14:44:03 -05:00
lumina2.py Only apply gemma quant config to gemma model for newbie. (#11436) 2025-12-20 01:02:43 -05:00
mt5_config_xl.json
newbie.py Only apply gemma quant config to gemma model for newbie. (#11436) 2025-12-20 01:02:43 -05:00
omnigen2.py Make old scaled fp8 format use the new mixed quant ops system. (#11000) 2025-12-05 14:35:42 -05:00
ovis.py Make old scaled fp8 format use the new mixed quant ops system. (#11000) 2025-12-05 14:35:42 -05:00
pixart_t5.py Fix chroma fp8 te being treated as fp16. (#11795) 2026-01-10 14:40:42 -08:00
qwen_image.py Make old scaled fp8 format use the new mixed quant ops system. (#11000) 2025-12-05 14:35:42 -05:00
qwen_vl.py P2 of qwen edit model. (#9412) 2025-08-18 22:38:34 -04:00
sa_t5.py More flexible long clip support. 2025-04-15 10:32:21 -04:00
sd2_clip_config.json
sd2_clip.py More flexible long clip support. 2025-04-15 10:32:21 -04:00
sd3_clip.py Make old scaled fp8 format use the new mixed quant ops system. (#11000) 2025-12-05 14:35:42 -05:00
spiece_tokenizer.py Fix hard crash when the spiece tokenizer path is bad. 2025-04-19 15:55:43 -04:00
t5_config_base.json
t5_config_xxl.json
t5_old_config_xxl.json
t5_pile_config_xl.json
t5.py P2 of qwen edit model. (#9412) 2025-08-18 22:38:34 -04:00
umt5_config_base.json Initial ACE-Step model implementation. (#7972) 2025-05-07 08:33:34 -04:00
umt5_config_xxl.json WIP support for Wan t2v model. 2025-02-25 17:20:35 -05:00
wan.py Make old scaled fp8 format use the new mixed quant ops system. (#11000) 2025-12-05 14:35:42 -05:00
z_image.py Make old scaled fp8 format use the new mixed quant ops system. (#11000) 2025-12-05 14:35:42 -05:00