Mirror of https://github.com/comfyanonymous/ComfyUI.git, synced 2026-03-17 15:15:00 +08:00
supports_fp8_compute() and supports_nvfp4_compute() used the global is_nvidia() check, which ignores the device argument, then defaulted to cuda:0 when device was None. In heterogeneous multi-GPU setups (e.g. RTX 5070 + RTX 3090 Ti) this causes the wrong GPU's compute capability to be checked, incorrectly disabling fp8 on capable devices.

Replace the global is_nvidia() gate with per-device checks:

- Default device=None to get_torch_device() explicitly
- Early-return False for CPU/MPS devices
- Use is_device_cuda(device) + torch.version.cuda instead of the global is_nvidia()

Fixes #4589, relates to #4577, #12405

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
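The per-device logic described in the commit message can be sketched as follows. This is a minimal, self-contained illustration, not the actual ComfyUI implementation: `Device` and `DEVICE_CAPABILITY` are stand-ins for real torch devices and `torch.cuda.get_device_capability`, and the example GPU-to-index mapping is an assumption chosen to show the bug scenario (the capable GPU sitting on an index other than `cuda:0`).

```python
from collections import namedtuple

# Stand-in for torch.device; real code passes torch devices around.
Device = namedtuple("Device", ["type", "index"])

# Hypothetical heterogeneous setup from the commit message:
# cuda:0 = RTX 3090 Ti (cc 8.6, no fp8), cuda:1 = RTX 5070 (cc 12.0, fp8-capable).
# Stand-in for torch.cuda.get_device_capability(device).
DEVICE_CAPABILITY = {0: (8, 6), 1: (12, 0)}

def supports_fp8_compute(device=None, default_device=Device("cuda", 0)):
    # Default device=None explicitly rather than implicitly probing cuda:0.
    if device is None:
        device = default_device
    # Early-return False for CPU/MPS devices.
    if device.type in ("cpu", "mps"):
        return False
    # Per-device check instead of a global is_nvidia() gate that
    # ignores which device is actually being asked about.
    if device.type != "cuda":
        return False
    major, minor = DEVICE_CAPABILITY[device.index]
    # fp8 compute requires compute capability >= 8.9 (Ada/Hopper and newer).
    return (major, minor) >= (8, 9)

print(supports_fp8_compute(Device("cuda", 1)))  # True: checked per device, not via cuda:0
print(supports_fp8_compute(Device("cuda", 0)))  # False: cc 8.6 lacks fp8
```

With the old global gate, asking about `cuda:1` would have consulted `cuda:0`'s capability and wrongly returned False for the fp8-capable GPU; the per-device lookup avoids that.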
Contents of `tests-unit/`:

- app_test
- assets_test
- comfy_api_test
- comfy_extras_test
- comfy_quant
- comfy_test
- execution_test
- folder_paths_test
- prompt_server_test
- seeder_test
- server/utils
- server_test
- utils
- feature_flags_test.py
- README.md
- requirements.txt
- websocket_feature_flags_test.py
# Pytest Unit Tests

Install test dependencies:

```
pip install -r tests-unit/requirements.txt
```

Run tests:

```
pytest tests-unit/
```