ComfyUI/comfy/ldm/wan
John Pollock 9250191c65 feat(isolation): DynamicVRAM compatibility for process isolation
DynamicVRAM's on-demand model loading/offloading conflicted with process isolation in three ways: RPC tensor transport could stall when a model was offloaded from the GPU mid-call, the model lifecycle raced against active RPC operations, and changed finalizer patterns triggered false-positive memory leak detection.

- Marshal CUDA tensors to CPU before RPC transport for dynamic models
- Add operation state tracking + quiescence waits at workflow boundaries
- Distinguish proxy reference release from actual leaks in cleanup_models_gc
- Fix init order: DynamicVRAM must initialize before isolation proxies
- Add RPC timeouts to prevent indefinite hangs on model unavailability
- Prevent proxy-of-proxy chains from DynamicVRAM model reload cycles
- Add torch.device/torch.dtype serializers for new DynamicVRAM RPC paths
- Guard isolation overhead so non-isolated workflows are unaffected
- Migrate env var to PYISOLATE_CHILD
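The "operation state tracking + quiescence waits" item above can be sketched as a small in-flight counter that the model lifecycle consults before offloading. This is a minimal illustrative sketch, not ComfyUI's actual implementation; the `RpcOperationTracker` name and its methods are hypothetical.

```python
import threading
from contextlib import contextmanager

class RpcOperationTracker:
    """Hypothetical sketch: count in-flight RPC operations so the model
    lifecycle can wait for quiescence (e.g. at a workflow boundary)
    before offloading a model that an RPC call may still be using."""

    def __init__(self):
        self._lock = threading.Lock()
        self._idle = threading.Condition(self._lock)
        self._in_flight = 0

    @contextmanager
    def operation(self):
        # Mark an RPC call as active for its duration.
        with self._lock:
            self._in_flight += 1
        try:
            yield
        finally:
            with self._lock:
                self._in_flight -= 1
                if self._in_flight == 0:
                    self._idle.notify_all()

    def wait_quiescent(self, timeout=None):
        # Block until no RPC operations are active; a timeout here plays
        # the same role as the RPC timeouts in the list above, preventing
        # an indefinite hang. Returns False if the wait timed out.
        with self._idle:
            return self._idle.wait_for(lambda: self._in_flight == 0,
                                       timeout=timeout)
```

Each RPC handler would wrap its body in `tracker.operation()`, and the offload path would call `tracker.wait_quiescent(timeout=...)` before releasing GPU memory, avoiding the mid-call offload race described above.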
2026-03-04 23:48:02 -06:00
model_animate.py [BlockInfo] Wan (#10845) 2025-12-15 17:59:16 -08:00
model_multitalk.py Support Multi/InfiniteTalk (#10179) 2026-01-21 23:09:48 -05:00
model.py feat(isolation): DynamicVRAM compatibility for process isolation 2026-03-04 23:48:02 -06:00
vae2_2.py WAN2.2: Fix cache VRAM leak on error (#10308) 2025-10-13 15:23:11 -04:00
vae.py Class WanVAE, def encode, feat_map is using self.decoder instead of self.encoder (#12682) 2026-02-27 19:03:45 -05:00