ComfyUI/comfy/ldm/lightricks
Octopus b8bb5427f9 fix: make attention_mask optional in LTXBaseModel.forward (fixes #13299)
The LTXBaseModel.forward and _forward methods required attention_mask as
a positional argument with no default value. However, LTXV.extra_conds
only conditionally adds attention_mask to model_conds when it is present
in kwargs. If attention_mask is not provided by the text encoder, the
diffusion_model forward call fails with:

  TypeError: LTXBaseModel.forward() missing 1 required positional argument: 'attention_mask'

The model already handles attention_mask=None correctly in both
_prepare_attention_mask and _prepare_context, so making the parameter
optional is the minimal safe fix. This also aligns with how
LTXAVDoubleStreamBlock.forward handles the parameter.
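The fix described above can be sketched as follows. This is an illustrative reduction, not the real model code: the actual LTXBaseModel.forward takes tensors and several more parameters, and the helper body here is invented. The point is only the signature change — giving attention_mask a default of None lets callers that omit it (e.g. when the text encoder supplies no mask) proceed instead of raising a TypeError, and the None value flows into helpers that already handle it.

```python
# Minimal sketch of the fix (hypothetical names/body; the real
# LTXBaseModel has a much larger signature and operates on tensors).
class LTXBaseModelSketch:
    def _prepare_attention_mask(self, attention_mask):
        # The real model already passes None through safely here,
        # which is why defaulting the parameter is a safe change.
        if attention_mask is None:
            return None
        return attention_mask

    def forward(self, x, timestep, context, attention_mask=None, **kwargs):
        # Before the fix, attention_mask was positional with no default,
        # so omitting it raised:
        #   TypeError: LTXBaseModel.forward() missing 1 required
        #   positional argument: 'attention_mask'
        mask = self._prepare_attention_mask(attention_mask)
        return x, mask
```

With the default in place, both call styles work: `model.forward(x, t, ctx)` runs with `mask is None`, while `model.forward(x, t, ctx, attention_mask=m)` behaves exactly as before.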
2026-04-06 13:17:51 +08:00
vae                       ltx: vae: Fix missing init variable (#13074)                              2026-03-19 22:34:58 -04:00
vocoders                  LTX audio vae novram fixes. (#12796)                                      2026-03-05 16:31:28 -05:00
av_model.py               feat: LTX2: Support reference audio (ID-LoRA) (#13111)                    2026-03-23 18:22:24 -04:00
embeddings_connector.py   Support the LTXAV 2.3 model. (#12773)                                     2026-03-04 20:06:20 -05:00
latent_upsampler.py       Support the LTXV 2 model. (#11632)                                        2026-01-05 01:58:59 -05:00
model.py                  fix: make attention_mask optional in LTXBaseModel.forward (fixes #13299)  2026-04-06 13:17:51 +08:00
symmetric_patchifier.py   Support the LTXV 2 model. (#11632)                                        2026-01-05 01:58:59 -05:00