fix: raise clear error when non-LLM model is used with TextGenerate node (fixes #13286)

Previously, connecting a CLIP text encoder (e.g. CLIPTextModel) to the
TextGenerate node instead of a language model (LLM) produced an
unhelpful AttributeError. A RuntimeError is now raised instead, clearly
explaining which model type is required.
This commit is contained in:
Octopus 2026-04-05 13:21:53 +08:00
parent eb0686bbb6
commit 76fe91c906


@@ -424,6 +424,12 @@ class CLIP:
         return self.patcher.get_key_patches()
     def generate(self, tokens, do_sample=True, max_length=256, temperature=1.0, top_k=50, top_p=0.95, min_p=0.0, repetition_penalty=1.0, seed=None, presence_penalty=0.0):
+        if not hasattr(self.cond_stage_model, 'generate'):
+            raise RuntimeError(
+                f"The loaded model ({type(self.cond_stage_model).__name__}) does not support text generation. "
+                "The TextGenerate node requires a language model (LLM) such as Qwen, LLaMA, or Gemma, "
+                "not a CLIP text encoder. Please load the correct model type."
+            )
         self.cond_stage_model.reset_clip_options()
         self.load_model(tokens)
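
The guard added above follows a simple capability-check pattern: probe the wrapped model for the required method with `hasattr` and fail fast with an actionable message. A minimal, self-contained sketch of the same pattern, using hypothetical stub classes (`CLIPTextEncoderStub`, `LLMStub`, `ModelWrapper` are illustrations, not part of the codebase):

```python
class CLIPTextEncoderStub:
    """Stands in for a CLIP text encoder: it has no `generate` method."""


class LLMStub:
    """Stands in for a language model that does support generation."""

    def generate(self, tokens):
        return f"generated from {tokens!r}"


class ModelWrapper:
    """Sketch of the capability check: fail fast with a clear error
    instead of letting an AttributeError surface deep in the call stack."""

    def __init__(self, cond_stage_model):
        self.cond_stage_model = cond_stage_model

    def generate(self, tokens):
        if not hasattr(self.cond_stage_model, 'generate'):
            raise RuntimeError(
                f"The loaded model ({type(self.cond_stage_model).__name__}) "
                "does not support text generation. A language model (LLM) "
                "is required, not a CLIP text encoder."
            )
        return self.cond_stage_model.generate(tokens)
```

With this guard, `ModelWrapper(CLIPTextEncoderStub()).generate(...)` raises a RuntimeError naming the offending model class, while `ModelWrapper(LLMStub()).generate(...)` proceeds normally.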