ComfyUI/tests-unit/comfy_test
Tsondo bb31f8b707 fix: per-device fp8/nvfp4 compute detection for multi-GPU setups
supports_fp8_compute() and supports_nvfp4_compute() used the global
is_nvidia() check, which ignores the device argument, and then
defaulted to cuda:0 when device was None. In heterogeneous multi-GPU
setups (e.g. RTX 5070 + RTX 3090 Ti) this caused the wrong GPU's
compute capability to be checked, incorrectly disabling fp8 on
capable devices.

Replace the global is_nvidia() gate with per-device checks:
- Default device=None to get_torch_device() explicitly
- Early-return False for CPU/MPS devices
- Use is_device_cuda(device) + torch.version.cuda instead of
  the global is_nvidia()
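
A minimal sketch of the per-device logic described above, with torch
stubbed out by a hypothetical FakeDevice class; in the real code the
capability lookup goes through torch.cuda.get_device_capability(device)
and the nvfp4 path has its own threshold:

```python
from dataclasses import dataclass

@dataclass
class FakeDevice:
    type: str          # "cuda", "cpu", or "mps"
    capability: tuple  # (major, minor) compute capability, CUDA only

def is_device_cuda(device):
    return device.type == "cuda"

def supports_fp8_compute(device=None, default_device=None):
    # Resolve device=None explicitly instead of implicitly using cuda:0.
    if device is None:
        device = default_device
    # Early-return False for CPU/MPS (and unresolved) devices.
    if device is None or not is_device_cuda(device):
        return False
    # Check THIS device's compute capability, not the global default's.
    major, minor = device.capability
    # fp8 compute requires Ada (sm_89) or newer (simplified threshold).
    return major > 8 or (major == 8 and minor >= 9)

# Heterogeneous setup: RTX 5070 (sm_120) alongside an RTX 3090 Ti (sm_86).
gpu0 = FakeDevice("cuda", (12, 0))
gpu1 = FakeDevice("cuda", (8, 6))
print(supports_fp8_compute(gpu0))  # True
print(supports_fp8_compute(gpu1))  # False: Ampere lacks fp8 compute
print(supports_fp8_compute(FakeDevice("cpu", ())))  # False
```

With the old global is_nvidia() gate, both results would have been
driven by whichever GPU happened to be cuda:0.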

Fixes #4589, relates to #4577, #12405

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-14 22:56:42 +01:00
folder_path_test.py Allow changing folder_paths.base_path via command line argument. (#6600) 2025-01-29 08:06:28 -05:00
model_detection_test.py Native LongCat-Image implementation (#12597) 2026-02-27 23:04:34 -05:00
model_management_test.py fix: per-device fp8/nvfp4 compute detection for multi-GPU setups 2026-03-14 22:56:42 +01:00