Mirror of https://github.com/comfyanonymous/ComfyUI.git (synced 2026-05-16 20:17:23 +08:00)
## VRAM Detection (model_management.py)

The DirectML code path had two hardcoded `1024 * 1024 * 1024 #TODO` values in `get_total_memory()` and `get_free_memory()`, so ComfyUI reported only 1 GB of VRAM on any AMD or Intel GPU using the DirectML backend, regardless of the actual hardware. This made the NORMAL_VRAM / LOW_VRAM mode calculations wildly wrong.

Fix for `get_total_memory()`:

- On Windows, reads `HardwareInformation.qwMemorySize` from the GPU driver's registry key via `winreg`. This is the accurate 64-bit value (unlike `Win32_VideoController.AdapterRAM`, which overflows at 4 GB).
- Allows an override via the `COMFYUI_DIRECTML_VRAM_MB` environment variable.
- Falls back to 6 GB if the registry query fails (a safe default for modern dGPUs).

Fix for `get_free_memory()`:

- Uses `torch_directml.gpu_memory(0)` to get per-tile usage fractions and derives free memory as `total * (1 - max_usage_fraction)`.

## torchaudio: optional import on AMD/DirectML

torchaudio has a DLL incompatibility with torch-directml (which ships its own torch runtime). The following files had a bare `import torchaudio` at module level, crashing ComfyUI startup entirely when torchaudio was absent:

- comfy/ldm/lightricks/vae/audio_vae.py
- comfy/audio_encoders/whisper.py
- comfy/audio_encoders/audio_encoders.py
- comfy_extras/nodes_audio.py
- comfy_extras/nodes_lt.py
- comfy_extras/nodes_wandancer.py

Each import is now wrapped in `try/except (ImportError, OSError): torchaudio = None`, matching the pattern already used in comfy/ldm/mmaudio/vae/autoencoder.py and comfy/ldm/ace/vae/music_dcae_pipeline.py. Audio nodes degrade gracefully instead of preventing ComfyUI from starting.

Tested on: AMD Radeon RX 5600 XT (6 GB VRAM, gfx1010, Windows 10)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
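The `get_total_memory()` logic described above can be sketched roughly as follows. This is not the patch itself: the function name `directml_total_memory_bytes` is invented for illustration, and the Windows display-adapter class GUID is the conventional registry location for GPU driver keys, which is assumed to be where the patch reads `HardwareInformation.qwMemorySize` from.

```python
import os

try:
    import winreg  # Windows-only stdlib module
except ImportError:
    winreg = None

# Conventional registry key for display adapters (assumption: this is
# where the driver exposes HardwareInformation.qwMemorySize).
DISPLAY_CLASS_KEY = (
    r"SYSTEM\CurrentControlSet\Control\Class"
    r"\{4d36e968-e325-11ce-bfc1-08002be10318}"
)

FALLBACK_BYTES = 6 * 1024 * 1024 * 1024  # 6 GB safe default for modern dGPUs


def directml_total_memory_bytes():
    """Illustrative sketch of the get_total_memory() fix for DirectML."""
    # 1. Explicit override always wins.
    override_mb = os.environ.get("COMFYUI_DIRECTML_VRAM_MB")
    if override_mb is not None:
        return int(override_mb) * 1024 * 1024

    # 2. On Windows, read the 64-bit qwMemorySize from the driver key.
    if winreg is not None:
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, DISPLAY_CLASS_KEY) as cls:
                n_subkeys = winreg.QueryInfoKey(cls)[0]
                for i in range(n_subkeys):  # subkeys "0000", "0001", ...
                    sub = winreg.EnumKey(cls, i)
                    try:
                        with winreg.OpenKey(cls, sub) as k:
                            value, _ = winreg.QueryValueEx(
                                k, "HardwareInformation.qwMemorySize")
                            if value:
                                return int(value)
                    except OSError:
                        continue  # key has no qwMemorySize value
        except OSError:
            pass  # registry query failed entirely

    # 3. Safe fallback when the registry is unavailable or empty.
    return FALLBACK_BYTES
```

On a non-Windows machine (or when the registry query fails) the function degrades to the env-var override or the 6 GB fallback, mirroring the ordering described in the commit message.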
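The `get_free_memory()` derivation can be shown as a small sketch. The exact return shape of `torch_directml.gpu_memory(0)` is an assumption here (the commit only says it reports per-tile usage), so the arithmetic is factored into a pure helper that takes the usage fractions directly; `free_memory_bytes` and `directml_free_memory` are hypothetical names.

```python
def free_memory_bytes(total_bytes, tile_usage_fractions):
    """Derive free VRAM as total * (1 - max per-tile usage fraction)."""
    if not tile_usage_fractions:
        return total_bytes  # no usage data: treat everything as free
    worst = max(tile_usage_fractions)
    return int(total_bytes * (1.0 - worst))


def directml_free_memory(total_bytes):
    """Sketch: query per-tile usage from torch-directml, then derive free."""
    try:
        import torch_directml  # only present on DirectML installs
        # Assumption: gpu_memory(0) yields per-tile usage fractions in [0, 1].
        fractions = list(torch_directml.gpu_memory(0))
        return free_memory_bytes(total_bytes, fractions)
    except Exception:
        return total_bytes  # query failed: report everything as free
```

Using the worst (most-used) tile keeps the estimate conservative: the scheduler never assumes more free memory than the busiest tile can actually provide.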
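The optional-import pattern applied to each of the files above (the same shape already used in comfy/ldm/mmaudio/vae/autoencoder.py) looks like this; the `load_audio` wrapper is a hypothetical example of how a node degrades when torchaudio is missing:

```python
try:
    import torchaudio
except (ImportError, OSError):
    # OSError covers the DLL load failure against torch-directml's
    # bundled torch runtime, not just a missing package.
    torchaudio = None


def load_audio(path):
    """Hypothetical node helper: fail at call time, not at import time."""
    if torchaudio is None:
        raise RuntimeError(
            "torchaudio is not available; audio nodes are disabled")
    return torchaudio.load(path)
```

The key point is that the failure moves from module import time (which kills ComfyUI startup) to node execution time, where it can be reported as an ordinary node error.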