Mirror of https://github.com/comfyanonymous/ComfyUI.git, synced 2026-04-15 13:02:35 +08:00
The `LTXBaseModel.forward` and `_forward` methods required `attention_mask` as a positional argument with no default value. However, `LTXV.extra_conds` only adds `attention_mask` to `model_conds` when it is present in `kwargs`. If the text encoder does not provide an attention mask, the `diffusion_model` forward call fails with:

    TypeError: LTXBaseModel.forward() missing 1 required positional argument: 'attention_mask'

The model already handles `attention_mask=None` correctly in both `_prepare_attention_mask` and `_prepare_context`, so making the parameter optional is the minimal safe fix. This also aligns with how `LTXAVDoubleStreamBlock.forward` handles the parameter.
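A minimal, simplified sketch of the failure mode and the fix. The class below is a stand-in, not the real `LTXBaseModel` (whose full signature and internals are much larger); it only illustrates why giving `attention_mask` a default of `None` lets callers that omit the mask succeed, given that the mask-preparation path is already None-safe.

```python
class LTXBaseModel:
    """Simplified stand-in for the real model; illustrative only."""

    def _prepare_attention_mask(self, attention_mask):
        # Already None-safe in the real code: no mask means full attention.
        if attention_mask is None:
            return None
        return attention_mask  # (the real code reshapes/expands the mask here)

    # Before the fix, `attention_mask` was positional with no default, so a
    # call that omitted it raised:
    #   TypeError: LTXBaseModel.forward() missing 1 required positional
    #   argument: 'attention_mask'
    # After the fix it defaults to None and the None-safe helpers handle it.
    def forward(self, x, timestep, context, attention_mask=None, **kwargs):
        mask = self._prepare_attention_mask(attention_mask)
        # ...the real _forward would run the transformer blocks with `mask`...
        return x


model = LTXBaseModel()
# With the default in place, omitting attention_mask no longer raises.
out = model.forward(x=[1.0], timestep=0, context=None)
```

The design point is that the default only changes the signature, not behavior: callers that pass a mask take the same code path as before, while callers that omit it fall through to the existing `None` handling.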