Mirror of https://github.com/comfyanonymous/ComfyUI.git (synced 2026-03-10 19:57:42 +08:00)
DynamicVRAM's on-demand model loading/offloading conflicted with process isolation in three ways: RPC tensor transport stalled when a model was offloaded from the GPU mid-call, the model lifecycle raced against active RPC operations, and changed finalizer patterns triggered false-positive memory-leak detection.

- Marshal CUDA tensors to CPU before RPC transport for dynamic models
- Add operation state tracking and quiescence waits at workflow boundaries
- Distinguish proxy reference release from actual leaks in cleanup_models_gc
- Fix init order: DynamicVRAM must initialize before isolation proxies
- Add RPC timeouts to prevent indefinite hangs on model unavailability
- Prevent proxy-of-proxy chains from DynamicVRAM model reload cycles
- Add torch.device/torch.dtype serializers for new DynamicVRAM RPC paths
- Guard isolation overhead so non-isolated workflows are unaffected
- Migrate the env var to PYISOLATE_CHILD
proxies/
  __init__.py
  adapter.py
  child_hooks.py
  clip_proxy.py
  extension_loader.py
  extension_wrapper.py
  host_hooks.py
  host_policy.py
  manifest_loader.py
  model_patcher_proxy_registry.py
  model_patcher_proxy_utils.py
  model_patcher_proxy.py
  model_sampling_proxy.py
  rpc_bridge.py
  runtime_helpers.py
  shm_forensics.py
  vae_proxy.py