Mirror of https://github.com/comfyanonymous/ComfyUI.git
Pinned memory was converted back to pinning the CPU-side weight without any changes. Fix the pinner to use the CPU weight rather than the model-defined geometry; this will either save RAM or stop buffer overruns when the dtypes mismatch. Fix the model-defined weight caster to use the [ s.weight, s.bias ] interpretation, since xfer_dest might now be the flattened pin. Fix the detection of needing to cast so it is not conditional on !pin.
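For illustration, a minimal sketch of the sizing fix described above: the pinned staging buffer is sized from the CPU-side tensors that will actually be copied into it, not from the model-defined geometry. The aligned_size helper and ALIGN constant are hypothetical stand-ins for comfy.memory_management.vram_aligned_size, assuming it rounds the total byte count up to an alignment boundary.

import torch

ALIGN = 256  # assumed alignment; stand-in for whatever vram_aligned_size uses

def aligned_size(tensors):
    # Total bytes of the CPU-side tensors, rounded up to the alignment.
    total = sum(t.numel() * t.element_size() for t in tensors if t is not None)
    return -(-total // ALIGN) * ALIGN  # ceiling division, then scale back up

# The model-defined geometry might say fp16, but the CPU-side copy is fp32 here:
cpu_weight = torch.zeros(1024, 1024, dtype=torch.float32)
cpu_bias = torch.zeros(1024, dtype=torch.float32)

# Sizing from the CPU tensors (4 bytes/element) avoids overrunning a pin sized
# for the 2-bytes/element model dtype, and avoids wasting RAM in the opposite case.
pin = torch.empty((aligned_size([cpu_weight, cpu_bias]),), dtype=torch.uint8)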
30 lines
856 B
Python
import torch
import comfy.model_management
import comfy.memory_management

from comfy.cli_args import args


def get_pin(module):
    return getattr(module, "_pin", None)


def pin_memory(module):
    if module.pin_failed or args.disable_pinned_memory or get_pin(module) is not None:
        return
    #FIXME: This is a RAM cache trigger event
    # Size the pin from the CPU-side weight and bias, not the model-defined geometry.
    size = comfy.memory_management.vram_aligned_size([module.weight, module.bias])
    pin = torch.empty((size,), dtype=torch.uint8)
    if comfy.model_management.pin_memory(pin):
        module._pin = pin
    else:
        module.pin_failed = True
        return False
    return True


def unpin_memory(module):
    if get_pin(module) is None:
        return 0
    # Number of bytes released, reported back to the caller.
    size = module._pin.numel() * module._pin.element_size()
    comfy.model_management.unpin_memory(module._pin)
    del module._pin
    return size
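For comparison, a minimal sketch of the [ s.weight, s.bias ] interpretation from the commit message, assuming the transfer destination may now be the flattened uint8 pin: the caster carves the pin into byte-exact views matching the CPU-side weight and bias before copying into them. views_into_pin is a hypothetical helper, not part of the repository.

import torch

def views_into_pin(pin, weight, bias):
    # Offsets follow the order the pin was sized with: weight first, then bias.
    # Reinterpreting uint8 slices assumes the byte offsets stay dtype-aligned.
    w_bytes = weight.numel() * weight.element_size()
    views = [pin[:w_bytes].view(weight.dtype).view(weight.shape)]
    if bias is not None:
        b_bytes = bias.numel() * bias.element_size()
        views.append(pin[w_bytes:w_bytes + b_bytes].view(bias.dtype).view(bias.shape))
    return views

weight = torch.randn(8, 8, dtype=torch.float16)
bias = torch.randn(8, dtype=torch.float16)
nbytes = weight.numel() * weight.element_size() + bias.numel() * bias.element_size()
pin = torch.empty((nbytes,), dtype=torch.uint8)

for dst, src in zip(views_into_pin(pin, weight, bias), (weight, bias)):
    dst.copy_(src)  # stage the CPU tensors into the pinned buffer

Under this interpretation, whether a cast is needed depends only on the source and destination dtypes, which is why the commit makes that check independent of whether a pin exists.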