Implement a cache sensitive to RAM pressure. When RAM headroom drops below a certain threshold, evict RAM-expensive nodes from the cache. Models and tensors are measured directly for RAM usage, and an OOM score is then computed from each node's measured usage. Note that due to indirection through shared objects (like a model patcher), multiple nodes can account for the same RAM in their individual usage. The intent is that this will free chains of nodes, particularly model loaders and their associated LoRAs, since they all score similarly and sort close to one another. The cache is biased towards unloading model nodes mid-flow while being able to keep results like text encodings and VAE output.
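
A minimal sketch of the eviction idea described above, assuming `psutil` for reading available system RAM and using a `torch.nn.Module` check as a stand-in for detecting model-holding outputs; the threshold value and the `ram_bytes_of` / `oom_score` / `RAMPressureCache` names are illustrative, not ComfyUI's actual API:

```python
import sys

import psutil  # assumed here for reading available system RAM
import torch

# Illustrative headroom threshold, not taken from the actual commit.
MIN_FREE_RAM = 4 * 1024**3  # start evicting below ~4 GiB of free RAM


def ram_bytes_of(value, seen):
    """Measure RAM usage directly for tensors and models.

    `seen` deduplicates shared objects within one node's measurement,
    but each node is measured with a fresh set, so nodes that share a
    model patcher each report the full size as their own usage.
    """
    if id(value) in seen:
        return 0
    seen.add(id(value))
    if isinstance(value, torch.Tensor):
        # Only tensors resident in system RAM count toward the score.
        return value.element_size() * value.nelement() if value.device.type == "cpu" else 0
    if isinstance(value, torch.nn.Module):
        return sum(ram_bytes_of(p, seen) for p in value.parameters())
    if isinstance(value, (list, tuple)):
        return sum(ram_bytes_of(v, seen) for v in value)
    return sys.getsizeof(value)


def oom_score(output):
    score = ram_bytes_of(output, set())
    # Assumed bias: weight model-holding nodes higher so lightweight
    # results like text encodings and VAE output survive longer.
    values = output if isinstance(output, (list, tuple)) else [output]
    if any(isinstance(v, torch.nn.Module) for v in values):
        score *= 2
    return score


class RAMPressureCache:
    """Cache that evicts its highest-OOM-score entries under RAM pressure."""

    def __init__(self):
        self._outputs = {}  # node_id -> cached node output

    def set(self, node_id, output):
        self._outputs[node_id] = output
        self._evict_if_needed()

    def get(self, node_id):
        return self._outputs.get(node_id)

    def _evict_if_needed(self):
        while self._outputs and psutil.virtual_memory().available < MIN_FREE_RAM:
            # Nodes sharing RAM through the same object score similarly,
            # so model-loader/LoRA chains sort together and are freed as
            # a group, matching the behavior described above.
            victim = max(self._outputs, key=lambda nid: oom_score(self._outputs[nid]))
            del self._outputs[victim]
```

Because each node is scored with its own fresh `seen` set, two nodes holding the same model patcher both report the full model size as their usage, which is what makes whole loader chains score alike and get evicted together.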
Files in this directory:

- caching.py
- graph_utils.py
- graph.py
- progress.py
- utils.py
- validation.py