update docs

commit 55b187768a (parent c2270bbc05)

README.md (49 lines changed)
@@ -2,6 +2,25 @@

A vanilla, up-to-date fork of [ComfyUI](https://github.com/comfyanonymous/comfyui) intended for long-term support (LTS) from [AppMana](https://appmana.com) and [Hidden Switch](https://hiddenswitch.com).

## Key Features and Differences

This LTS fork enhances vanilla ComfyUI with enterprise-grade features, focusing on stability, ease of deployment, and scalability while maintaining full compatibility.

### Deployment and Installation
- **Pip and UV Installable:** Install via `pip` or `uv` directly from GitHub. No manual cloning is required for users; see the example below this list.
- **Automatic Model Downloading:** Missing models (e.g., Stable Diffusion, FLUX, LLMs) are downloaded on demand from Hugging Face or CivitAI.
- **Docker and Containers:** First-class support for Docker and Kubernetes with optimized containers for NVIDIA and AMD.

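A minimal install sketch, assuming a fresh virtual environment managed by `uv`; the Git URL below is a placeholder for this repository's clone URL, not a canonical package name.

```bash
# Create and activate a virtual environment, then install directly from GitHub.
# Replace the placeholder URL with this repository's actual clone URL.
uv venv
source .venv/bin/activate
uv pip install "git+https://github.com/<owner>/<this-repository>.git"
```
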
### Scalability and Performance
- **Distributed Inference:** Run scalable inference clusters with multiple workers and frontends using RabbitMQ; see the sketch after this list.
- **Embedded Mode:** Use ComfyUI as a Python library (`import comfy`) inside your own applications without the web server.
- **LTS Custom Nodes:** A curated set of "Installable" custom nodes (ControlNet, AnimateDiff, IPAdapter) optimized for this fork.

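A rough sketch of a minimal two-process cluster against a local RabbitMQ broker. Only the `docker run` line is a known-good command; the frontend/worker entry points and flag names are illustrative assumptions, and [docs/distributed.md](docs/distributed.md) documents the real invocation.

```bash
# Start a local RabbitMQ broker (real image and port).
docker run -d --name rabbitmq -p 5672:5672 rabbitmq:3

# Illustrative only: one frontend and one worker sharing the same AMQP URI.
# The actual entry points and flags are described in docs/distributed.md.
comfyui --distributed-queue-frontend --distributed-queue-connection-uri "amqp://guest:guest@localhost:5672"
comfyui-worker --distributed-queue-connection-uri "amqp://guest:guest@localhost:5672"
```
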
### Enhanced Capabilities
- **LLM Support:** Native support for Large Language Models (LLaMA, Phi-3, etc.) and multi-modal workflows.
- **API and Configuration:** Enhanced API endpoints and extensive configuration options via CLI arguments, environment variables, and config files; see the sketch after this list.
- **Tests:** An automated test suite ensures stability for new features.

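For illustration only: the `--listen`/`--port` flags follow upstream ComfyUI's CLI convention, but the `comfyui` entry point and the environment-variable names shown here are assumptions; [docs/configuration.md](docs/configuration.md) is the authoritative reference.

```bash
# Two equivalent ways to change the bind address and port (illustrative).
comfyui --listen 0.0.0.0 --port 8188

# Environment-variable names are assumptions, shown only to illustrate the pattern.
COMFYUI_LISTEN=0.0.0.0 COMFYUI_PORT=8188 comfyui
```
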
## Quickstart (Linux)
### UI Users
@@ -60,17 +79,21 @@ For developers contributing to the codebase or building on top of it.

Full documentation is available in [docs/index.md](docs/index.md).

### Table of Contents

### Core
- [Installation & Getting Started](docs/installing.md)
- [Hardware Compatibility](docs/compatibility.md)
- [Configuration](docs/configuration.md)
- [Troubleshooting](docs/troubleshooting.md)

- [New Features Compared to Upstream](docs/index.md#new-features-compared-to-upstream)
- [Getting Started](docs/index.md#getting-started)
- [Installing](docs/index.md#installing)
- [Model Downloading](docs/index.md#model-downloading)
- [LTS Custom Nodes](docs/index.md#lts-custom-nodes)
- [Large Language Models](docs/index.md#large-language-models)
- [Video Workflows](docs/index.md#video-workflows)
- [Custom Nodes](docs/index.md#custom-nodes)
- [Configuration](docs/index.md#configuration)
- [API Usage](docs/index.md#using-comfyui-as-an-api--programmatically)
- [Distributed / Multi-GPU](docs/index.md#distributed-multi-process-and-multi-gpu-comfy)
- [Docker Compose](docs/index.md#docker-compose)
### Features & Workflows
- [Large Language Models](docs/llm.md)
- [Video Workflows](docs/video.md) (AnimateDiff, SageAttention, etc.)
- [Other Features](docs/other_features.md) (SVG, Ideogram)

### Extending ComfyUI
- [Custom Nodes](docs/custom_nodes.md) (Installing & Authoring)
- [API Usage](docs/api.md) (Python, REST, Embedded)

### Deployment
- [Distributed / Multi-GPU](docs/distributed.md)
- [Docker & Containers](docs/docker.md)

docs/compatibility.md (new file, 84 lines)
@@ -0,0 +1,84 @@
# Hardware & Software Compatibility

This project is rigorously tested on specific hardware and software configurations to ensure stability and performance.

## Compatibility Matrix

### Linux

| Hardware | Python | CUDA / ROCm | PyTorch | Torch-TensorRT | Container Image | Status |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| **NVIDIA RTX 3090** (24GB) | 3.12 | 12.9.1 | Latest | 2.8.0a0 | `nvcr.io/nvidia/pytorch:25.06-py3` | ✅ Automated |
| **NVIDIA RTX 3090** (24GB) | 3.12 | 12.8.1 | Latest | 2.7.0a0 | `nvcr.io/nvidia/pytorch:25.03-py3` (LTS) | ✅ Automated |
| **NVIDIA RTX 3090** (24GB) | 3.10 | 12.6.2 | Latest | 2.5.0a0 | `nvcr.io/nvidia/pytorch:24.10-py3` | ✅ Automated |
| **NVIDIA RTX 3090** (24GB) | 3.10 | 12.3.2 | Latest | 2.2.0a0 | `nvcr.io/nvidia/pytorch:23.12-py3` | ✅ Automated |
| **AMD RX 7600** (8GB) | 3.12 | ROCm 7.0 | 2.7.1 (Nightly) | N/A | `rocm/pytorch:rocm7.0_ubuntu24.04_py3.12_pytorch_release_2.7.1` | ✅ Automated |

**AMD Note:** Automated testing for AMD uses a specific nightly build of PyTorch 2.7.1 optimized for RDNA 3 (`gfx110X`) from `https://rocm.nightlies.amd.com/v2/gfx110X-dgpu/`.

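To reproduce the LTS-tested Linux environment locally on an NVIDIA machine, one option is to start from the corresponding container in the matrix above; this is only a sketch (it requires the NVIDIA Container Toolkit), and installing ComfyUI inside the container is left to the installation docs.

```bash
# Enter the LTS-tested PyTorch container from the matrix above.
docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:25.03-py3 bash
```
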
### macOS

| Hardware | Python | Acceleration | Status |
| :--- | :--- | :--- | :--- |
| **Apple Silicon** (M1/M2/M3) | 3.12 | MPS (Metal Performance Shaders) | ✅ Automated (macOS 14 Runner) |

### Windows

Windows support is manually verified.

| Hardware | Python | CUDA | Drivers | PyTorch | Status |
| :--- | :--- | :--- | :--- | :--- | :--- |
| **NVIDIA RTX 3090** | 3.10 - 3.12 | 12.8 | 560+ | 2.7, 2.8, 2.9 | ✅ Manually Verified |

## AMD ROCm Support for Other Architectures

You can install ComfyUI with acceleration on other AMD architectures by pointing `uv` to the correct package index.

### Architecture Table

Find your GPU in the table below to determine the correct index URL.

| Series | Models (Examples) | Architecture | Index URL |
| :--- | :--- | :--- | :--- |
| **RX 9000** | RX 9070 / XT, RX 9060 / XT | RDNA 4 (`gfx1200`, `gfx1201`) | `https://rocm.nightlies.amd.com/v2/gfx120X-all/` |
| **RX 7000** | RX 7900 XTX, 7800 XT, 7600 | RDNA 3 (`gfx1100`, `gfx1101`, `gfx1102`) | `https://rocm.nightlies.amd.com/v2/gfx110X-all/` |
| **RX 7000 (M)** | Radeon 780M (Laptop), 7700S | RDNA 3 (`gfx1103`) | `https://rocm.nightlies.amd.com/v2/gfx110X-all/` |
| **Strix Halo** | Strix Halo iGPU | RDNA 3.5 (`gfx1151`) | `https://rocm.nightlies.amd.com/v2/gfx1151/` |
| **Instinct** | MI300A, MI300X | CDNA 3 (`gfx942`) | `https://rocm.nightlies.amd.com/v2/gfx94X-dcgpu/` |
| **Instinct** | MI350X, MI355X | CDNA 4 (`gfx950`) | `https://rocm.nightlies.amd.com/v2/gfx950-dcgpu/` |

### Installation Examples

Use `uv pip install` with the `--index-url` corresponding to your hardware.

**RX 9000 Series (RDNA 4)**
```bash
uv pip install --index-url https://rocm.nightlies.amd.com/v2/gfx120X-all/ --pre torch torchaudio torchvision
```

**RX 7000 Series (RDNA 3)**
```bash
uv pip install --index-url https://rocm.nightlies.amd.com/v2/gfx110X-all/ --pre torch torchaudio torchvision
```

**RX 6000 Series (RDNA 2)**
```bash
uv pip install --torch-backend=auto --pre torch torchvision torchaudio
```

**RX 5000 Series (RDNA 1)**
```bash
uv pip install --torch-backend=auto --pre torch torchvision torchaudio
```

**Instinct MI300**
```bash
uv pip install --index-url https://rocm.nightlies.amd.com/v2/gfx94X-dcgpu/ --pre torch torchaudio torchvision
```

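After installation, you can sanity-check that the accelerated build is active; on ROCm builds of PyTorch, `torch.version.hip` is set and `torch.cuda` acts as the HIP device interface.

```bash
# Prints the PyTorch version, the HIP version (None on non-ROCm builds),
# and either the detected GPU name or False if no accelerator is visible.
python -c "import torch; print(torch.__version__, torch.version.hip); print(torch.cuda.is_available() and torch.cuda.get_device_name(0))"
```
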
## Notes

- **NVIDIA:** Automated testing uses official NVIDIA PyTorch containers to ensure compatibility with the latest deep learning stack.
- **AMD:** Automated testing targets ROCm 7.0 on the RDNA 3 architecture (RX 7000 series).
- **macOS:** Tested on macOS 14 runners with Python 3.12 using the `mps` backend for acceleration.
- **Windows:** While not part of the automated CI loop, Windows builds are manually verified against recent PyTorch and CUDA versions on standard consumer hardware.

@@ -2,6 +2,7 @@

## Core
- [Installation & Getting Started](installing.md)
- [Hardware Compatibility](compatibility.md)
- [Configuration](configuration.md)
- [Troubleshooting](troubleshooting.md)