Mirror of https://github.com/comfyanonymous/ComfyUI.git (synced 2026-05-14 11:07:24 +08:00)
Skip aimdo.so dlopen on WSL to mirror init_device guard (#13458)
`main.py` calls `comfy_aimdo.control.init()` at the top of the file —
`ctypes.CDLL("aimdo.so", mode=RTLD_NOW | RTLD_GLOBAL)` — gated only on
`enables_dynamic_vram()`. The downstream `init_device()` call further
down has a stricter guard (`is_nvidia() and not is_wsl()`), but by then
the library has already been loaded and any module-load side effects
(CUDA hook installation, allocator interposition via RTLD_GLOBAL symbol
replacement) are in place regardless of whether `init_device()` ever
runs.
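For reference, `ctypes` exposes the same eager, global-visibility load mode described above. A minimal sketch of the pattern follows, substituting `libm` for `aimdo.so` purely so it runs on a stock glibc system — the library name and the `print` are illustrative, not from the patch:

```python
import ctypes
import os

# RTLD_NOW: resolve every symbol at load time (fail fast on missing symbols).
# RTLD_GLOBAL: export the library's symbols to objects loaded afterwards —
# this visibility is what makes allocator interposition via symbol
# replacement possible as a pure load-time side effect.
mode = os.RTLD_NOW | os.RTLD_GLOBAL

# Stand-in for ctypes.CDLL("aimdo.so", mode=mode).
libm = ctypes.CDLL("libm.so.6", mode=mode)
print(libm is not None)
```

The key point is that any constructor or hook-installation code in the library runs at this `CDLL` call, independent of whether any function from it is ever invoked afterwards.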
That asymmetry explains the regression in #13458: WSL + NVIDIA users
on the auto-disable path still trip the bug at `loss.backward()` even
though `aimdo_enabled` stays `False` and every `aimdo_enabled`-gated
codepath in `ops.py`, `model_patcher.py`, `model_management.py`,
`model_detection.py` and `utils.py` falls through to legacy behaviour.
The only thing `--disable-dynamic-vram` actually changes for those
users is short-circuiting `enables_dynamic_vram()`, which prevents the
`dlopen`.
Mirror the WSL exclusion from the `init_device` guard at the dlopen
site too. Use a small inline helper that calls `platform.uname()`
directly because the full `comfy.model_management.is_wsl()` lives
behind a torch import that intentionally happens after this load on
non-WSL configurations. The explicit `--enable-dynamic-vram` flag
still forces the load through, matching the override semantics of
the `init_device` guard.
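The resulting guard semantics can be condensed into a small truth-table sketch; the function and parameter names here are illustrative, not identifiers from the patch:

```python
def should_dlopen(enable_flag: bool, dynamic_vram: bool, is_wsl: bool) -> bool:
    """New dlopen guard: an explicit --enable-dynamic-vram forces the load;
    otherwise the auto path additionally requires a non-WSL host."""
    return enable_flag or (dynamic_vram and not is_wsl)

# WSL + auto-enabled dynamic VRAM: previously dlopen'd, now skipped.
assert should_dlopen(enable_flag=False, dynamic_vram=True, is_wsl=True) is False
# The explicit flag still forces the load through, even on WSL.
assert should_dlopen(enable_flag=True, dynamic_vram=False, is_wsl=True) is True
# Non-WSL auto path is unchanged.
assert should_dlopen(enable_flag=False, dynamic_vram=True, is_wsl=False) is True
```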
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
This commit is contained in:
parent 20f5e474da
commit df5fe86512
main.py: 24 changed lines
@@ -39,9 +39,31 @@ if __name__ == "__main__":
     faulthandler.enable(file=sys.stderr, all_threads=False)
 
+    import platform
+
     import comfy_aimdo.control
 
-    if enables_dynamic_vram():
+    def _is_wsl_pre_torch():
+        """Mirror of `comfy.model_management.is_wsl()` for use before torch is
+        imported. The full implementation lives in `comfy.model_management`,
+        which transitively imports torch — and `aimdo.so` is loaded below in
+        non-WSL configurations *before* that import so its CUDA hooks are in
+        place when torch initializes its bindings."""
+        version = platform.uname().release
+        return version.endswith("-Microsoft") or version.endswith("microsoft-standard-WSL2")
+
+
+    # Mirror the WSL exclusion from the `init_device` guard further down so
+    # `aimdo.so` is not `dlopen`'d at all on WSL. The library is loaded with
+    # `RTLD_NOW | RTLD_GLOBAL`, and its module-load side effects (CUDA hook
+    # installation, allocator interposition) apply regardless of whether
+    # `init_device()` is later called — which is why setting
+    # `--disable-dynamic-vram` is currently the only thing that prevents the
+    # regression reported in #13458 on WSL + NVIDIA. The explicit
+    # `--enable-dynamic-vram` flag still forces the load through, matching
+    # the override semantics of the `init_device` guard.
+    if args.enable_dynamic_vram or (enables_dynamic_vram() and not _is_wsl_pre_torch()):
         comfy_aimdo.control.init()
 
     if os.name == "nt":
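The kernel-release suffix check used by `_is_wsl_pre_torch()` can be exercised in isolation by factoring it over a plain string. The helper name and the sample release strings below are illustrative (typical WSL1/WSL2/bare-metal values), not taken from the patch:

```python
def release_looks_like_wsl(release: str) -> bool:
    # WSL1 kernels report a release ending in "-Microsoft";
    # WSL2 kernels report one ending in "microsoft-standard-WSL2".
    return release.endswith("-Microsoft") or release.endswith("microsoft-standard-WSL2")

assert release_looks_like_wsl("4.4.0-19041-Microsoft")               # WSL1-style
assert release_looks_like_wsl("5.15.167.4-microsoft-standard-WSL2")  # WSL2-style
assert not release_looks_like_wsl("6.8.0-45-generic")                # regular Linux
```

Keeping the check string-pure like this avoids any dependence on the host for testing, while the production helper only adds the `platform.uname().release` lookup on top.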