ComfyUI/tests/isolation/test_model_management_proxy.py
John Pollock c5e7b9cdaf feat(isolation): process isolation for custom nodes via pyisolate
Adds opt-in process isolation for custom nodes using pyisolate's
bwrap sandbox and JSON-RPC bridge. Each isolated node pack runs in
its own child process with zero-copy tensor transfer via shared memory.

Core infrastructure:
- CLI flag --use-process-isolation to enable isolation
- Host/child startup fencing via PYISOLATE_CHILD env var
- Manifest-driven node discovery and extension loading
- JSON-RPC bridge between host and child processes
- Shared memory forensics for leak detection
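The host/child startup fence above can be sketched as a simple environment-variable check. This is a minimal illustration, not the actual pyisolate implementation; the variable name PYISOLATE_CHILD comes from the commit, but the value `"1"` and the helper name `is_isolated_child` are assumptions for this sketch.

```python
import os

def is_isolated_child() -> bool:
    """Return True when running inside a pyisolate child process.

    Sketch only: the host is assumed to set PYISOLATE_CHILD in each
    child's environment, so code that must run only on the host (or
    only in the child) checks this fence before executing.
    """
    return os.environ.get("PYISOLATE_CHILD") == "1"

# Example fence: host-only startup work is skipped in the child.
if not is_isolated_child():
    pass  # host-only initialization would go here
```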

Proxy layer:
- ModelPatcher, CLIP, VAE, and ModelSampling proxies
- Host service proxies (folder_paths, model_management, progress, etc.)
- Proxy base with automatic method forwarding
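"Automatic method forwarding" in the proxy base can be illustrated with `__getattr__`: any attribute not defined on the proxy becomes a callable that ships the method name and arguments across the bridge. This is a hedged sketch, assuming a pluggable transport callable; the class name `ProxyBase` and the `rpc_call` parameter are illustrative, not the actual ComfyUI API.

```python
class ProxyBase:
    """Minimal sketch of a proxy with automatic method forwarding."""

    def __init__(self, rpc_call):
        # rpc_call(method, args, kwargs) is assumed to perform the
        # cross-process call (JSON-RPC in the real bridge).
        self._rpc_call = rpc_call

    def __getattr__(self, name):
        # Called only for attributes not found normally, so every
        # unknown method lookup yields a forwarding closure.
        def forward(*args, **kwargs):
            return self._rpc_call(name, args, kwargs)
        return forward

# Usage with a fake transport that records the forwarded call.
calls = []

def fake_rpc(method, args, kwargs):
    calls.append((method, args, kwargs))
    return "ok"

proxy = ProxyBase(fake_rpc)
result = proxy.get_torch_device_name("cuda:0")
```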

Execution integration:
- Extension wrapper with V3 hidden param mapping
- Runtime helpers for isolated node execution
- Host policy for node isolation decisions
- Fenced sampler device handling and model ejection parity

Serializers for cross-process data transfer:
- File3D (GLB), PLY (structured + gaussian), NPZ (streaming frames),
  VIDEO (VideoFromFile + VideoFromComponents) serializers
- data_type flag in SerializerRegistry for type-aware dispatch
- Isolated get_temp_directory() fence
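The `data_type` flag for type-aware dispatch amounts to a registry keyed by that flag. A minimal sketch, assuming a dict-backed registry; the method names `register`/`get` and the example serializers are hypothetical, not the actual SerializerRegistry interface.

```python
class SerializerRegistry:
    """Sketch of type-aware dispatch keyed by a data_type flag."""

    def __init__(self):
        self._by_type = {}

    def register(self, data_type, serializer):
        # Map a data_type string (e.g. "ply", "npz") to a serializer.
        self._by_type[data_type] = serializer

    def get(self, data_type):
        # Look up the serializer registered for this data_type.
        return self._by_type[data_type]

# Usage: register a toy PLY serializer and dispatch by flag.
registry = SerializerRegistry()
registry.register("ply", lambda obj: b"PLY:" + obj)
payload = registry.get("ply")(b"verts")
```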

New core save nodes:
- SavePLY and SaveNPZ with comfytype registrations (Ply, Npz)

DynamicVRAM compatibility:
- comfy-aimdo early init gated by isolation fence

Tests:
- Integration and policy tests for isolation lifecycle
- Manifest loader, host policy, proxy, and adapter unit tests

Depends on: pyisolate >= 0.9.2
2026-03-12 01:13:43 -05:00


"""Unit tests for ModelManagementProxy."""
import pytest
import torch
from comfy.isolation.proxies.model_management_proxy import ModelManagementProxy
class TestModelManagementProxy:
"""Test ModelManagementProxy methods."""
@pytest.fixture
def proxy(self):
"""Create a ModelManagementProxy instance for testing."""
return ModelManagementProxy()
def test_get_torch_device_returns_device(self, proxy):
"""Verify get_torch_device returns a torch.device object."""
result = proxy.get_torch_device()
assert isinstance(result, torch.device), f"Expected torch.device, got {type(result)}"
def test_get_torch_device_is_valid(self, proxy):
"""Verify get_torch_device returns a valid device (cpu or cuda)."""
result = proxy.get_torch_device()
assert result.type in ("cpu", "cuda"), f"Unexpected device type: {result.type}"
def test_get_torch_device_name_returns_string(self, proxy):
"""Verify get_torch_device_name returns a non-empty string."""
device = proxy.get_torch_device()
result = proxy.get_torch_device_name(device)
assert isinstance(result, str), f"Expected str, got {type(result)}"
assert len(result) > 0, "Device name is empty"
def test_get_torch_device_name_with_cpu(self, proxy):
"""Verify get_torch_device_name works with CPU device."""
cpu_device = torch.device("cpu")
result = proxy.get_torch_device_name(cpu_device)
assert isinstance(result, str), f"Expected str, got {type(result)}"
assert "cpu" in result.lower(), f"Expected 'cpu' in device name, got: {result}"
def test_get_torch_device_name_with_cuda_if_available(self, proxy):
"""Verify get_torch_device_name works with CUDA device if available."""
if not torch.cuda.is_available():
pytest.skip("CUDA not available")
cuda_device = torch.device("cuda:0")
result = proxy.get_torch_device_name(cuda_device)
assert isinstance(result, str), f"Expected str, got {type(result)}"
# Should contain device identifier
assert len(result) > 0, "CUDA device name is empty"