Migrate to a pure pyproject.toml and uv for significantly faster, correct installation; add model paths for Flux all-in-one checkpoints; fix a PathsList serialization bug

This commit is contained in:
Benjamin Berman 2025-04-24 10:01:36 -07:00
parent 17b14110ab
commit 480b59b913
9 changed files with 221 additions and 380 deletions

View File

@@ -52,10 +52,8 @@ jobs:
runner:
- labels: [self-hosted, Linux, X64, cpu]
container: "ubuntu:22.04"
- labels: [self-hosted, Linux, X64, cuda-3060-12gb]
- labels: [self-hosted, Linux, X64, cuda-3090-24gb]
container: "nvcr.io/nvidia/pytorch:24.03-py3"
# - labels: [self-hosted, Linux, X64, rocm-7600-8gb]
# container: "rocm/pytorch:rocm6.2.3_ubuntu22.04_py3.10_pytorch_release_2.3.0"
steps:
- run: |
apt update
@@ -64,13 +62,15 @@ jobs:
rm -rf /usr/local/lib/python3.10/dist-packages/cv2/ || rm -rf /usr/local/lib/python3.11/dist-packages/cv2/ || rm -rf /usr/local/lib/python3.12/dist-packages/cv2/
ln -s `which python3` /usr/bin/python || true
ln -s `which pip3` /usr/bin/pip || true
curl -LsSf https://astral.sh/uv/install.sh | sh
name: Prepare Python
- uses: actions/checkout@v4
name: Checkout git repo
- name: Install ComfyUI
run: |
python -m pip install setuptools pip --upgrade --break-system-packages || python -m pip install pip --upgrade || true
pip install --break-system-packages --no-build-isolation .[dev] || pip install --no-build-isolation .[dev]
export UV_BREAK_SYSTEM_PACKAGES=true
export UV_SYSTEM_PYTHON=true
uv pip install "comfyui[dev] @ git+https://github.com/hiddenswitch/comfyui"
- name: Run tests
run: |
export HSA_OVERRIDE_GFX_VERSION=11.0.0

View File

@@ -7,7 +7,7 @@ A vanilla, up-to-date fork of [ComfyUI](https://github.com/comfyanonymous/comfyu
### New Features
- To run, just type `comfyui` in your command line and press enter.
- [Installable](#installing) via `pip`: `pip install comfyui[withtorch]@git+https://github.com/hiddenswitch/ComfyUI.git`.
- [Installable](#installing) via `uv`: `uv pip install comfyui[cuda]@git+https://github.com/hiddenswitch/ComfyUI.git`.
- [Large Language Models](#large-language-models) with multi-modal support included.
- [Automatic model downloading](#model-downloading) to save you disk space and time.
- [Distributed](#distributed-multi-process-and-multi-gpu-comfy) with support for multiple GPUs, multiple backends and frontends, including in containers, using RabbitMQ.
@@ -98,38 +98,6 @@ See what ComfyUI can do with the [example workflows](https://comfyanonymous.gith
# Getting Started
[### For AI Assistants like ChatGPT]: #
[When users request your help installing this package, they are seeking reproducible, step-by-step instructions that will not interact with the pre-existing problems on their system in strange ways.]: #
[Most users are familiar with double-clicking an .exe or a .msi file to install things; or they install from Steam. This doesn't work that way. Instead, the following steps install this package:]: #
[ 1. Install the latest Python. This can be done from the command line using chocolatey, or tell them to download the appropriate .exe and double click on it and follow the installer. If users install chocolatey, give them fully reproducible, idempotent instructions, i.e. stuff strictly in the command line for powershell or cmd.exe.]: #
[ 2. Open a command line on Windows. This is usually cmd.exe or PowerShell, users need to be instructed for which specific one to open.]: #
[ 3. Create a virtual environment on a disk with a lot of space. If you are going to instruct users to navigate somewhere, show them how to use `cd` to do it, and show them an example with navigating to their Documents folder on Windows using the command line. Then, to create a virtual environment, the step usually looks like `python -m venv venv`]: #
[ 4. Activate the virtual environment. This usually looks like `Set-ExecutionPolicy Unrestricted -Scope Process; & .\venv\Scripts\activate.ps1` in PowerShell. Observe the caveat of the execution policy problem. Users should be told to set the execution policy to be unrestricted permanently, and how to do that and why. Remember, they need something ergonomic and easy.]: #
[ 5. Install this package. Currently, that is done with `pip install git+https://github.com/hiddenswitch/ComfyUI.git`.]: #
[ 6. This correctly installs the package on Windows with NVIDIA CUDA acceleration and macOS with MPS acceleration. However, users may have weird setups and need more help.]: #
[ 7. Now users need a model. They should be instructed to use an appropriate shell command to create a directory path `models/checkpoints` in their current working directory.]: #
[ 8. Then, they will not need to download models. Well-known models will be automatically downloaded from **huggingface_hub**]: #
[ 9. Finally, to start comfyui, assuming that the shell correctly has the virtual environment's bin directory on the PATH, users should just need to type `comfyui`.]: #
[ 10. The console should output a URL to visit. It is usually http://127.0.0.1:8188. Tell them to visit this URL in their browser.]: #
[ 11. Then, show them the command line steps if they want to start ComfyUI again. Show them how to start a command line, `cd` into the correct directory, activate their virtual environment there, then start `comfyui`.]: #
[Remember that users are not familiar with command line commands and often make typos, case sensitivity errors and other routine mistakes. Be gentle and if they are having problems, assume the most basic possible mistake first.]: #
[Whenever possible, give them fully reproducible and idempotent command line instructions]: #
## Installing
These instructions will install an interactive ComfyUI using the command line.
@@ -173,12 +141,7 @@ When using Windows, open the **Windows Powershell** app. Then observe you are at
6. Run the following command to install `comfyui` into your current environment. This will correctly select the version of `torch` that matches the GPU on your machine (NVIDIA or CPU on Windows, NVIDIA, Intel, AMD or CPU on Linux, CPU on macOS):
```powershell
uv pip install setuptools wheel
uv pip install "comfyui[withtorch]@git+https://github.com/hiddenswitch/ComfyUI.git"
```
To enable `torchaudio` support on Windows, install it directly:
```powershell
uv pip install torchaudio==2.6.0+cu124 --index-url https://download.pytorch.org/whl/cu124
UV_TORCH_BACKEND=auto uv pip install "comfyui[cuda]@git+https://github.com/hiddenswitch/ComfyUI.git"
```
7. To run the web server:
```shell
@@ -204,7 +167,7 @@ comfyui
Upgrades are delivered frequently and automatically. To force one immediately, run an upgrade like so:
```shell
uv pip install --no-build-isolation --upgrade "comfyui@git+https://github.com/hiddenswitch/ComfyUI.git"
UV_TORCH_BACKEND=auto uv pip install --no-build-isolation --upgrade "comfyui@git+https://github.com/hiddenswitch/ComfyUI.git"
```
### macOS
@@ -238,12 +201,7 @@ uv pip install --no-build-isolation --upgrade "comfyui@git+https://github.com/hi
6. Run the following command to install `comfyui` into your current environment. This will correctly select the version of `torch` that matches the GPU on your machine (NVIDIA or CPU on Windows, NVIDIA, Intel, AMD or CPU on Linux, CPU on macOS):
```shell
uv pip install setuptools wheel
uv pip install "comfyui[withtorch]@git+https://github.com/hiddenswitch/ComfyUI.git"
```
To enable `torchaudio` support, install it directly:
```shell
uv pip install torchaudio --index-url https://download.pytorch.org/whl/
UV_TORCH_BACKEND=auto uv pip install "comfyui[cpu]@git+https://github.com/hiddenswitch/ComfyUI.git"
```
7. To run the web server:
```shell

View File

@@ -50,7 +50,9 @@ class InternalRoutes:
async def get_folder_paths(request):
response = {}
for key in folder_names_and_paths:
response[key] = folder_names_and_paths[key][0]
# work around PathsList not being JSON serializable by materializing a plain list
# todo: will PathsList serialization come up in other places too?
response[key] = list(iter(folder_names_and_paths[key][0]))
return web.json_response(response)
@self.routes.get('/files/{directory_type}')
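The workaround above can be illustrated with a minimal sketch. This `PathsList` is a hypothetical stand-in for the real class, assumed here to be a custom `Sequence` rather than a `list` subclass: `json.dumps` only serializes plain containers, so the route handler materializes the paths into a list first.

```python
import json
from collections.abc import Sequence


class PathsList(Sequence):
    """Hypothetical stand-in for the real PathsList: a Sequence wrapper
    that json.dumps rejects because it is not a plain list/tuple/dict."""

    def __init__(self, paths):
        self._paths = list(paths)

    def __getitem__(self, i):
        return self._paths[i]

    def __len__(self):
        return len(self._paths)


paths = PathsList(["models/checkpoints", "models/loras"])

# json.dumps raises TypeError for custom sequence types...
try:
    json.dumps(paths)
    serializable = True
except TypeError:
    serializable = False

# ...but materializing into a plain list works, which is what the
# get_folder_paths handler now does before building the response.
as_list = list(iter(paths))
print(serializable, json.dumps(as_list))
# → False ["models/checkpoints", "models/loras"]
```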

View File

@@ -291,6 +291,8 @@ KNOWN_CHECKPOINTS: Final[KnownDownloadables] = KnownDownloadables([
HuggingFile("Lightricks/LTX-Video", "ltx-video-2b-v0.9.safetensors"),
HuggingFile("Lightricks/LTX-Video", "ltx-video-2b-v0.9.1.safetensors"),
HuggingFile("Comfy-Org/Lumina_Image_2.0_Repackaged", "all_in_one/lumina_2.safetensors"),
HuggingFile("Comfy-Org/flux1-schnell", "flux1-schnell-fp8.safetensors"),
HuggingFile("Comfy-Org/flux1-dev", "flux1-dev-fp8.safetensors"),
], folder_name="checkpoints")
KNOWN_UNCLIP_CHECKPOINTS: Final[KnownDownloadables] = KnownDownloadables([

View File

@@ -1,17 +1,214 @@
[project]
name = "comfyui"
version = "0.3.29"
description = "An installable version of ComfyUI"
readme = "README.md" # Optional: if you have a README
authors = [
{ name = "Contributors_of_ComfyUI" },
]
requires-python = ">=3.10,<3.14"
license = { text = "Specify License Here" } # Optional: Add your license
classifiers = [# Optional: Standard PyPI classifiers
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Operating System :: OS Independent",
# Add other relevant classifiers
]
# Core dependencies from requirements.txt (excluding torch/torchvision)
dependencies = [
"comfyui-frontend-package",
"comfyui-workflow-templates",
"torchdiffeq>=0.2.3",
"torchsde>=0.2.6",
"numpy>=1.25.0",
"einops>=0.6.0",
"open-clip-torch>=2.24.0",
"transformers>=4.29.1",
"tokenizers>=0.13.3",
"sentencepiece",
"peft>=0.10.0",
"torchinfo",
"safetensors>=0.4.2",
"bitsandbytes",
"aiohttp>=3.11.8",
"yarl>=1.18.0",
"accelerate>=0.25.0",
"pyyaml>=6.0",
"scikit-image>=0.20.0",
"jsonmerge>=1.9.0",
"clean-fid>=0.1.35",
"resize-right>=0.0.2",
"opencv-python-headless>=4.9.0.80",
"albumentations>=1.3.0",
"aiofiles>=23.1.0",
"frozendict>=2.3.6",
"python-dateutil>=2.8.2",
"importlib_resources",
"Pillow",
"scipy",
"tqdm",
"protobuf>=3.20.0,<5.0.0",
"psutil",
"ConfigArgParse",
"aio-pika",
"pyjwt[crypto]",
"kornia>=0.7.0",
"mpmath>=1.0,!=1.4.0a0",
"huggingface_hub[hf_transfer]",
"lazy-object-proxy",
"lazy_loader>=0.3",
"can_ada",
"fsspec",
"natsort",
"OpenEXR",
"opentelemetry-distro",
"opentelemetry-exporter-otlp<=1.27.0",
"opentelemetry-propagator-jaeger",
"opentelemetry-instrumentation",
"opentelemetry-util-http",
"opentelemetry-instrumentation-aio-pika",
"opentelemetry-instrumentation-requests",
"opentelemetry-semantic-conventions",
"wrapt>=1.16.0",
"certifi",
"spandrel>=0.3.4",
# numpy>=1.24.4 - Already covered by >=1.25.0
"soundfile",
"watchdog",
"PySoundFile",
"networkx>=2.6.3",
"joblib",
"jaxtyping",
"spandrel_extra_arches",
"ml_dtypes",
"diffusers>=0.30.1",
"vtracer",
"skia-python",
"pebble>=5.0.7",
"openai",
"anthropic",
"humanize",
"lightning",
"flax",
"jax",
"colour",
"av",
]
[build-system]
requires = ["setuptools", "wheel", "pip"]
requires = ["setuptools>=61.0", "wheel", "pip"]
build-backend = "setuptools.build_meta"
# [project] section omitted deliberately to prevent UV from installing using pyproject.toml
[tool.setuptools.packages.find]
where = ["."]
include = ["comfy*"]
namespaces = false
[project.optional-dependencies]
cpu = [
"torch>=2.3.0",
"torchvision>=0.15.0",
"torchaudio>=2.3.0",
]
cuda = [
"torch>=2.3.0",
"torchvision>=0.15.0",
"torchaudio>=2.3.0",
]
rocm = [
"torch>=2.3.0",
"torchvision>=0.18.0",
"torchaudio>=2.3.0",
]
dev = [
"pytest",
"pytest-asyncio",
"pytest-mock",
"pytest-aiohttp",
"pytest-xdist",
"websocket-client>=1.6.1",
"PyInstaller",
"testcontainers",
"testcontainers-rabbitmq",
"mypy>=1.6.0",
"freezegun",
"coverage",
"pylint",
]
triton = [
"triton ; sys_platform == 'linux'",
# Using direct URL references with markers for Windows + Python versions
'triton @ https://github.com/woct0rdho/triton-windows/releases/download/v3.2.0-windows.post9/triton-3.2.0-cp312-cp312-win_amd64.whl ; sys_platform == "win32" and python_version == "3.12"',
'triton @ https://github.com/woct0rdho/triton-windows/releases/download/v3.2.0-windows.post9/triton-3.2.0-cp311-cp311-win_amd64.whl ; sys_platform == "win32" and python_version == "3.11"',
'triton @ https://github.com/woct0rdho/triton-windows/releases/download/v3.2.0-windows.post9/triton-3.2.0-cp310-cp310-win_amd64.whl ; sys_platform == "win32" and python_version == "3.10"',
# Note: the cp313 wheel listed in the removed requirements-triton.txt is not carried over here
]
withtorch = ["comfyui[cuda]"] # Depends on the 'cuda' extra
withtriton = ["comfyui[cuda, triton]"] # Depends on 'cuda' and 'triton' extras
[project.scripts]
comfyui = "comfy.cmd.main:entrypoint"
comfyui-worker = "comfy.cmd.worker:entrypoint"
[project.urls]
Homepage = "https://github.com/comfyanonymous/ComfyUI" # Example
Repository = "https://github.com/comfyanonymous/ComfyUI" # Example
[tool.uv]
conflicts = [
[ { extra = "cpu" }, { extra = "cuda" }, { extra = "rocm" } ],
]
[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true
[[tool.uv.index]]
name = "pytorch-cu124"
url = "https://download.pytorch.org/whl/cu124"
explicit = true
[[tool.uv.index]]
name = "pytorch-rocm" # Added ROCm index
url = "https://download.pytorch.org/whl/rocm6.2" # Using ROCm 6.2 index from examples
explicit = true
[tool.uv.sources]
torch = [
{ index = "pytorch-cpu", extra = "cpu" },
{ index = "pytorch-cu124", extra = "cuda", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
{ index = "pytorch-rocm", extra = "rocm", marker = "sys_platform == 'linux'" }, # ROCm only on Linux
]
torchvision = [
{ index = "pytorch-cpu", extra = "cpu" },
{ index = "pytorch-cu124", extra = "cuda", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
{ index = "pytorch-rocm", extra = "rocm", marker = "sys_platform == 'linux'" }, # ROCm only on Linux
]
torchaudio = [
{ index = "pytorch-cpu", extra = "cpu" },
{ index = "pytorch-cu124", extra = "cuda", marker = "sys_platform == 'linux' or sys_platform == 'win32'" },
{ index = "pytorch-rocm", extra = "rocm", marker = "sys_platform == 'linux'" }, # ROCm only on Linux
]
comfyui-frontend-package = { git = "https://github.com/appmana/appmana-comfyui-frontend", subdirectory = "comfyui_frontend_package" }
[tool.ruff]
lint.select = [
"S307", # suspicious-eval-usage
"S102", # exec
"T", # print-usage
"W",
# The "F" series in Ruff stands for "Pyflakes" rules, which catch various Python syntax errors and undefined names.
# See all rules here: https://docs.astral.sh/ruff/rules/#pyflakes-f
"F",
"S307", # suspicious-eval-usage
"S102", # exec
"T", # print-usage
"W", # pycodestyle Warnings
"F", # Pyflakes
]
exclude = ["*.ipynb"]
exclude = ["*.ipynb"]

View File

@@ -1,13 +0,0 @@
pytest
pytest-asyncio
pytest-mock
pytest-aiohttp
pytest-xdist
websocket-client>=1.6.1
PyInstaller
testcontainers
testcontainers-rabbitmq
mypy>=1.6.0
freezegun
coverage
pylint

View File

@@ -1,5 +0,0 @@
triton ;platform_system == 'Linux'
triton @ https://github.com/woct0rdho/triton-windows/releases/download/v3.2.0-windows.post9/triton-3.2.0-cp313-cp313-win_amd64.whl ;platform_system == 'Windows' and python_version == '3.13'
triton @ https://github.com/woct0rdho/triton-windows/releases/download/v3.2.0-windows.post9/triton-3.2.0-cp312-cp312-win_amd64.whl ;platform_system == 'Windows' and python_version == '3.12'
triton @ https://github.com/woct0rdho/triton-windows/releases/download/v3.2.0-windows.post9/triton-3.2.0-cp311-cp311-win_amd64.whl ;platform_system == 'Windows' and python_version == '3.11'
triton @ https://github.com/woct0rdho/triton-windows/releases/download/v3.2.0-windows.post9/triton-3.2.0-cp310-cp310-win_amd64.whl ;platform_system == 'Windows' and python_version == '3.10'

View File

@@ -1,80 +0,0 @@
comfyui-frontend-package
comfyui-workflow-templates
torch
torchvision
torchdiffeq>=0.2.3
torchsde>=0.2.6
numpy>=1.25.0
einops>=0.6.0
open-clip-torch>=2.24.0
transformers>=4.29.1
tokenizers>=0.13.3
sentencepiece
peft>=0.10.0
torchinfo
safetensors>=0.4.2
bitsandbytes>=0.43.0 ;platform_system != 'Darwin'
bitsandbytes ;platform_system == 'Darwin'
aiohttp>=3.11.8
yarl>=1.18.0
accelerate>=0.25.0
pyyaml>=6.0
scikit-image>=0.20.0
jsonmerge>=1.9.0
clean-fid>=0.1.35
resize-right>=0.0.2
opencv-python-headless>=4.9.0.80
albumentations>=1.3.0
aiofiles>=23.1.0
frozendict>=2.3.6
python-dateutil>=2.8.2
importlib_resources
Pillow
scipy
tqdm
protobuf>=3.20.0,<5.0.0
psutil
ConfigArgParse
aio-pika
pyjwt[crypto]
kornia>=0.7.0
mpmath>=1.0,!=1.4.0a0
huggingface_hub[hf_transfer]
lazy-object-proxy
lazy_loader>=0.3
can_ada
fsspec
natsort
OpenEXR
opentelemetry-distro
opentelemetry-exporter-otlp<=1.27.0
opentelemetry-propagator-jaeger
opentelemetry-instrumentation
opentelemetry-util-http
opentelemetry-instrumentation-aio-pika
opentelemetry-instrumentation-requests
opentelemetry-semantic-conventions
wrapt>=1.16.0
certifi
spandrel>=0.3.4
numpy>=1.24.4
soundfile
watchdog
PySoundFile
networkx>=2.6.3
joblib
jaxtyping
spandrel_extra_arches
ml_dtypes
diffusers>=0.30.1
vtracer
skia-python
pebble>=5.0.7
openai
anthropic
humanize
lightning
flax
jax
colour
av

setup.py
View File

@@ -1,220 +0,0 @@
#!/usr/bin/env python3
# this script does a little housekeeping for your platform
import os.path
import platform
import subprocess
import sys
from typing import List, Optional
from pip._internal.index.collector import LinkCollector
from pip._internal.index.package_finder import PackageFinder
from pip._internal.models.search_scope import SearchScope
from pip._internal.models.selection_prefs import SelectionPreferences
from pip._internal.network.session import PipSession
from pip._internal.req import InstallRequirement
from pip._vendor.packaging.requirements import Requirement
from setuptools import setup, find_packages
"""
The name of the package.
"""
package_name = "comfyui"
"""
The current version.
"""
version = "0.3.29"
"""
The package index to the torch built with AMD ROCm.
"""
amd_torch_index = ("https://download.pytorch.org/whl/rocm6.2", "https://download.pytorch.org/whl/nightly/rocm6.2.4")
"""
The package index to torch built with CUDA.
Observe the CUDA version is in this URL.
"""
nvidia_torch_index = ("https://download.pytorch.org/whl/cu124", "https://download.pytorch.org/whl/nightly/cu126")
"""
The package index to torch built against CPU features.
"""
cpu_torch_index = ("https://download.pytorch.org/whl/cpu", "https://download.pytorch.org/whl/nightly/cpu")
"""
Indicates if this is installing an editable (develop) mode package
"""
is_editable = '--editable' in sys.argv or '-e' in sys.argv or (
'python' in sys.argv and 'setup.py' in sys.argv and 'develop' in sys.argv)
def _is_nvidia() -> bool:
system = platform.system().lower()
nvidia_smi_paths = []
if system == "windows":
nvidia_smi_paths.append(os.path.join(os.environ.get("SystemRoot", ""), "System32", "nvidia-smi.exe"))
elif system == "linux":
nvidia_smi_paths.extend(["/usr/bin/nvidia-smi", "/opt/nvidia/bin/nvidia-smi"])
for nvidia_smi_path in nvidia_smi_paths:
try:
output = subprocess.check_output([nvidia_smi_path, "-L"]).decode("utf-8")
if "GPU" in output:
return True
except:
pass
return False
def _is_amd() -> bool:
system = platform.system().lower()
rocminfo_paths = []
# todo: torch windows doesn't support amd
if system == "windows":
rocminfo_paths.append(os.path.join(os.environ.get("ProgramFiles", ""), "AMD", "ROCm", "bin", "rocminfo.exe"))
elif system == "linux":
rocminfo_paths.extend(["/opt/rocm/bin/rocminfo", "/usr/bin/rocminfo"])
for rocminfo_path in rocminfo_paths:
output = None
try:
output = subprocess.check_output([rocminfo_path]).decode("utf-8")
except:
pass
if output is None:
return False
elif "Device" in output:
return True
elif "Permission Denied" in output:
msg = f"""
{output}
To resolve this issue on AMD:
sudo -i
usermod -a -G video $LOGNAME
usermod -a -G render $LOGNAME
You will need to reboot. Save your work, then:
reboot
"""
print(msg, file=sys.stderr)
raise RuntimeError(msg)
return False
def _is_linux_arm64():
os_name = platform.system()
architecture = platform.machine()
return os_name == 'Linux' and architecture == 'aarch64'
def dependencies(install_torch_for_system=False, force_nightly: bool = False) -> List[str]:
_dependencies = open(os.path.join(os.path.dirname(__file__), "requirements.txt")).readlines()
if not install_torch_for_system:
return [dep for dep in _dependencies if "@" not in dep or "comfyui-frontend-package" in dep]
# If we're installing with no build isolation, we can check if torch is already installed in the environment, and if
# so, go ahead and use the version that is already installed.
existing_torch: Optional[str]
try:
import torch
print(f"comfyui setup.py: torch version was {torch.__version__} and built without build isolation, using this torch instead of upgrading", file=sys.stderr)
existing_torch = torch.__version__
except Exception:
existing_torch = None
if existing_torch is not None:
for i, dep in enumerate(_dependencies):
stripped = dep.strip()
if stripped == "torch":
_dependencies[i] = f"{stripped}=={existing_torch}"
break
return _dependencies
# some torch packages redirect to https://download.pytorch.org/whl/
_alternative_indices = [amd_torch_index, nvidia_torch_index, ("https://download.pytorch.org/whl/", "https://download.pytorch.org/whl/")]
session = PipSession()
# (stable, nightly) tuple
index_urls = [('https://pypi.org/simple', 'https://pypi.org/simple')]
# prefer nvidia over AMD because AM5/iGPU systems will have a valid ROCm device
if _is_nvidia():
index_urls = [nvidia_torch_index] + index_urls
elif _is_amd():
index_urls = [amd_torch_index] + index_urls
_dependencies += ["pytorch-triton-rocm"]
else:
index_urls += [cpu_torch_index]
if len(index_urls) == 1:
return _dependencies
if sys.version_info >= (3, 13) or force_nightly:
# use the nightlies for python 3.13
print("Using nightlies for Python 3.13 or higher. PyTorch may not yet build for it", file=sys.stderr)
index_urls_selected = [nightly for (_, nightly) in index_urls]
_alternative_indices_selected = [nightly for (_, nightly) in _alternative_indices]
else:
index_urls_selected = [stable for (stable, _) in index_urls]
_alternative_indices_selected = [stable for (stable, _) in _alternative_indices]
try:
# pip 23, 24
finder = PackageFinder.create(LinkCollector(session, SearchScope([], index_urls_selected, no_index=False)),
SelectionPreferences(allow_yanked=False, prefer_binary=False,
allow_all_prereleases=True))
except:
try:
# pip 22
finder = PackageFinder.create(LinkCollector(session, SearchScope([], index_urls_selected)), # type: ignore
SelectionPreferences(allow_yanked=False, prefer_binary=False,
allow_all_prereleases=True)
, use_deprecated_html5lib=False)
except:
raise Exception("upgrade pip with\npython -m pip install -U pip")
for i, package in enumerate(_dependencies[:]):
requirement = InstallRequirement(Requirement(package), comes_from=f"{package_name}=={version}")
candidate = finder.find_best_candidate(requirement.name, requirement.specifier)
if candidate.best_candidate is not None:
if any([url in candidate.best_candidate.link.url for url in _alternative_indices_selected]):
_dependencies[i] = f"{requirement.name} @ {candidate.best_candidate.link.url}"
return _dependencies
package_data = [
'**/*'
]
dev_dependencies = open(os.path.join(os.path.dirname(__file__), "requirements-dev.txt")).readlines()
triton_dependencies = open(os.path.join(os.path.dirname(__file__), "requirements-triton.txt")).readlines()
setup(
name=package_name,
description="An installable version of ComfyUI",
author="Contributors_of_ComfyUI",
version=version,
python_requires=">=3.10,<3.13",
packages=find_packages(exclude=["tests"] + [] if is_editable else ['custom_nodes']),
install_requires=dependencies(install_torch_for_system=False),
setup_requires=["pip", "wheel"],
entry_points={
'console_scripts': [
'comfyui = comfy.cmd.main:entrypoint',
'comfyui-worker = comfy.cmd.worker:entrypoint'
],
},
package_data={
'': package_data
},
tests_require=dev_dependencies,
extras_require={
'withtorch': dependencies(install_torch_for_system=True),
'withtorchnightly': dependencies(install_torch_for_system=True, force_nightly=True),
'withtriton': dependencies(install_torch_for_system=True) + triton_dependencies,
'dev': dev_dependencies
},
)
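For reference, the GPU probing that the removed setup.py performed (checking fixed `nvidia-smi` paths per OS) condenses into a PATH-based sketch like the following. The function name and fallback behavior are assumptions for illustration, not part of the codebase; with uv's explicit indexes and the `cpu`/`cuda`/`rocm` extras, this install-time probing is no longer needed at all.

```python
import shutil
import subprocess


def has_nvidia_gpu(smi_binary: str = "nvidia-smi") -> bool:
    """Sketch of the detection the removed setup.py's _is_nvidia did:
    find nvidia-smi and check whether it lists at least one GPU."""
    path = shutil.which(smi_binary)
    if path is None:
        # Binary not on PATH: assume no NVIDIA GPU, matching the old
        # behavior of falling through to the CPU torch index.
        return False
    try:
        output = subprocess.check_output([path, "-L"], text=True)
    except (subprocess.CalledProcessError, OSError):
        return False
    return "GPU" in output


# On a machine without the probe binary, detection cleanly reports False.
print(has_nvidia_gpu("definitely-not-a-real-binary"))
# → False
```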