feat(dockerfile): add CuPy (CUDA 12.x), keep wheel-only installs, and align CUDA headers with CUDA 12.9 toolchain

- Add cupy-cuda12x to the base image so CuPy installs from prebuilt wheels during the build, without requiring a GPU on GitHub runners. This matches the CUDA 12.x runtime already in the image (CUDA 12.9 libs) and leaves CuPy ready for GPU hosts at runtime.
- Keep PyTorch (CUDA 12.9), Triton, and the media libs; no features are removed.
- Follows CuPy's guidance to install cupy-cuda12x via pip for CUDA 12.x. The wheel expects CUDA headers from cuda-cudart-dev-12-x (already in the image) or, if needed, via the nvidia-cuda-runtime-cu12 PyPI package, consistent with our Debian CUDA 12.9 setup.
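The build-without-GPU expectation above can be sanity-checked at container start with a small script (a sketch; the `cupy_status` helper is ours, not part of the image):

```python
# Sanity check that the cupy-cuda12x wheel imports on CPU-only hosts
# (e.g. GitHub runners) and only touches the GPU when one is present.
def cupy_status():
    try:
        import cupy  # wheel install; no compilation at import time
    except ImportError:
        return "cupy not installed"
    try:
        # Raises if no CUDA driver/device is available
        count = cupy.cuda.runtime.getDeviceCount()
        return f"cupy {cupy.__version__}: {count} GPU(s) visible"
    except Exception:
        return f"cupy {cupy.__version__}: no usable GPU (expected at build time)"

print(cupy_status())
```

On a CPU-only runner this should report either "not installed" or "no usable GPU" rather than failing the build.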
clsferguson 2025-09-29 22:38:19 -06:00 committed by GitHub
parent b0b95e5cc5
commit 16652fb90a


@@ -88,9 +88,10 @@ WORKDIR /app/ComfyUI
 # Copy requirements with optional handling
 COPY requirements.txt* ./
-# Core Python deps (torch CUDA 12.9, ComfyUI reqs), media/NVML libs
+# Core Python deps (torch CUDA 12.9, ComfyUI reqs), media/NVML libs, and CuPy (CUDA 12.x wheel)
 RUN python -m pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu129 \
     && python -m pip install triton \
+    && python -m pip install --prefer-binary cupy-cuda12x \
     && if [ -f requirements.txt ]; then python -m pip install -r requirements.txt; fi \
     && python -m pip install imageio-ffmpeg "av>=14.2" nvidia-ml-py
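Whether the CUDA 12.x wheel line actually landed can be confirmed from installed-package metadata alone, without importing CuPy or touching the GPU (a sketch; the `installed_cupy_dist` helper is ours):

```python
from importlib import metadata

# Report which CuPy distribution (if any) is installed, using only
# package metadata. The CUDA 12.x wheel line ships under the
# distribution name cupy-cuda12x; plain "cupy" would indicate a
# source install, which this image is meant to avoid.
def installed_cupy_dist():
    for name in ("cupy-cuda12x", "cupy"):
        try:
            return name, metadata.version(name)
        except metadata.PackageNotFoundError:
            continue
    return None, None
```

Inside the built image, `installed_cupy_dist()` should return `("cupy-cuda12x", …)`; on a machine without CuPy it returns `(None, None)`.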