Emiliooooo e860732dba fix(directml): correct VRAM detection and make torchaudio imports optional
## VRAM Detection (model_management.py)

The DirectML code path had two hardcoded `1024 * 1024 * 1024 #TODO` values
in `get_total_memory()` and `get_free_memory()`, causing ComfyUI to report
only 1 GB of VRAM on any AMD/Intel GPU using the DirectML backend, regardless
of the actual hardware. As a result, the NORMAL_VRAM/LOW_VRAM mode selection
and memory budgeting were wildly wrong.

Fix for `get_total_memory`:
- On Windows, reads `HardwareInformation.qwMemorySize` from the GPU driver
  registry key via `winreg`. This is an accurate 64-bit value, unlike
  `Win32_VideoController.AdapterRAM`, a 32-bit field that overflows at 4 GB.
- Allows override via `COMFYUI_DIRECTML_VRAM_MB` env var.
- Falls back to 6 GB if registry query fails (safe default for modern dGPUs).
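The lookup order above can be sketched as follows. This is a minimal illustration, not the patch itself: the helper name and the `"0000"` adapter subkey are assumptions (the class GUID `{4d36e968-e325-11ce-bfc1-08002be10318}` is the standard display-adapter key, but the subkey index varies per machine, so a real implementation enumerates subkeys and matches the active adapter).

```python
import os

try:
    import winreg  # Windows-only stdlib module
except ImportError:
    winreg = None

GB = 1024 * 1024 * 1024

def directml_total_memory_bytes(adapter_subkey="0000"):
    # 1. Explicit override wins; the env var is in megabytes.
    override_mb = os.environ.get("COMFYUI_DIRECTML_VRAM_MB")
    if override_mb is not None:
        return int(override_mb) * 1024 * 1024
    if winreg is None:
        return 6 * GB  # not on Windows; use the safe default
    try:
        path = (r"SYSTEM\CurrentControlSet\Control\Class"
                r"\{4d36e968-e325-11ce-bfc1-08002be10318}"
                "\\" + adapter_subkey)
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            # REG_QWORD: a full 64-bit size, immune to the 4 GB
            # overflow of Win32_VideoController.AdapterRAM.
            value, _ = winreg.QueryValueEx(
                key, "HardwareInformation.qwMemorySize")
        return int(value)
    except OSError:
        # 3. Registry query failed: safe default for modern dGPUs.
        return 6 * GB
```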

Fix for `get_free_memory`:
- Uses `torch_directml.gpu_memory(0)` to get per-tile usage fractions and
  derives free memory as `total * (1 - max_usage_fraction)`.
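The derivation reduces to a small pure function. Here the per-tile usage fractions (as returned by `torch_directml.gpu_memory(0)` per the description above) are passed in directly so the arithmetic can be shown in isolation; the function name is illustrative, not the one in the patch.

```python
def directml_free_memory(total_bytes, tile_usage_fractions):
    """Derive free VRAM from per-tile usage fractions in [0, 1].
    The most-used tile bounds how much of the card is available."""
    if not tile_usage_fractions:
        return total_bytes  # no usage data: assume everything is free
    max_usage = max(tile_usage_fractions)
    return int(total_bytes * (1.0 - max_usage))
```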

## torchaudio: optional import on AMD/DirectML

torchaudio has a DLL incompatibility with torch-directml (which ships its own
torch runtime). The following files had a bare `import torchaudio` at module
level, crashing ComfyUI at startup whenever that import failed:

- comfy/ldm/lightricks/vae/audio_vae.py
- comfy/audio_encoders/whisper.py
- comfy/audio_encoders/audio_encoders.py
- comfy_extras/nodes_audio.py
- comfy_extras/nodes_lt.py
- comfy_extras/nodes_wandancer.py

Each import is wrapped in `try/except (ImportError, OSError): torchaudio = None`,
matching the pattern already used in comfy/ldm/mmaudio/vae/autoencoder.py and
comfy/ldm/ace/vae/music_dcae_pipeline.py. Audio nodes will degrade gracefully
rather than preventing ComfyUI from starting.
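The guarded-import pattern looks like this; the `load_audio` helper is hypothetical, standing in for any audio-node entry point that should fail at call time rather than at import time.

```python
try:
    import torchaudio
except (ImportError, OSError):
    # ImportError: package not installed.
    # OSError: package present but its DLLs clash with the torch
    # runtime bundled by torch-directml.
    torchaudio = None

def load_audio(path):
    """Hypothetical audio-node helper: raise a clear error at call
    time instead of crashing ComfyUI at startup."""
    if torchaudio is None:
        raise RuntimeError(
            "torchaudio is not available; audio nodes are disabled")
    return torchaudio.load(path)
```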

Tested on: AMD Radeon RX 5600 XT (6 GB VRAM, gfx1010, Windows 10)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 12:10:31 -04:00