model_patcher: guard against none model_dtype (#12410)

Handle the case where the _model_dtype attribute exists but is None by
applying the intended fallback.
This commit is contained in:
rattus 2026-02-11 11:54:02 -08:00 committed by GitHub
parent 2a4328d639
commit 3fe61cedda


@@ -1525,7 +1525,7 @@ class ModelPatcherDynamic(ModelPatcher):
                 setattr(m, param_key + "_function", weight_function)
             geometry = weight
             if not isinstance(weight, QuantizedTensor):
-                model_dtype = getattr(m, param_key + "_comfy_model_dtype", weight.dtype)
+                model_dtype = getattr(m, param_key + "_comfy_model_dtype", None) or weight.dtype
                 weight._model_dtype = model_dtype
                 geometry = comfy.memory_management.TensorGeometry(shape=weight.shape, dtype=model_dtype)
         return comfy.memory_management.vram_aligned_size(geometry)
@@ -1551,7 +1551,7 @@ class ModelPatcherDynamic(ModelPatcher):
         weight.seed_key = key
         set_dirty(weight, dirty)
         geometry = weight
-        model_dtype = getattr(m, param + "_comfy_model_dtype", weight.dtype)
+        model_dtype = getattr(m, param + "_comfy_model_dtype", None) or weight.dtype
         geometry = comfy.memory_management.TensorGeometry(shape=weight.shape, dtype=model_dtype)
         weight_size = geometry.numel() * geometry.element_size()
         if vbar is not None and not hasattr(weight, "_v"):
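The size computed in this hunk is the product of element count and element width, which is why the dtype fallback matters: a wrong (or None) dtype changes the byte size attributed to the weight. A hedged sketch of that computation, with a hypothetical dtype-size table standing in for comfy's TensorGeometry:

```python
import math

# Hypothetical element widths in bytes; the real code gets this
# from the tensor geometry's element_size().
DTYPE_SIZES = {"float32": 4, "float16": 2, "bfloat16": 2}

def tensor_bytes(shape, dtype):
    """Mimics geometry.numel() * geometry.element_size()."""
    numel = math.prod(shape)            # total element count
    return numel * DTYPE_SIZES[dtype]   # bytes occupied

# A 1024x1024 float16 weight occupies 2 MiB, twice that in float32.
assert tensor_bytes((1024, 1024), "float16") == 2 * 1024 * 1024
assert tensor_bytes((1024, 1024), "float32") == 4 * 1024 * 1024
```

This illustrates why falling back to weight.dtype, rather than carrying a None through, keeps the VRAM accounting consistent.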