Add Dockerfile. Update README and clarify purpose of this fork. Fix installation with pre-existing torch on Linux. Include container documentation.

doctorpangloss 2024-02-28 10:29:08 -08:00
parent cea0bc345d
commit 2c882bb66c
7 changed files with 91 additions and 401 deletions

View File

@@ -48,10 +48,10 @@ jobs:
 echo If you just want to update normally, close this and run update_comfyui.bat instead.
 echo -
 pause
-..\python_embeded\python.exe -s -m pip install --upgrade torch torchvision torchaudio ${{ inputs.xformers }} --extra-index-url https://download.pytorch.org/whl/cu${{ inputs.cu }} -r ../ComfyUI/requirements.txt pygit2
+..\python_embeded\python.exe -s -m pip install --upgrade torch torchvision ${{ inputs.xformers }} --extra-index-url https://download.pytorch.org/whl/cu${{ inputs.cu }} -r ../ComfyUI/requirements.txt pygit2
 pause" > update_comfyui_and_python_dependencies.bat
-python -m pip wheel --no-cache-dir torch torchvision torchaudio ${{ inputs.xformers }} --extra-index-url https://download.pytorch.org/whl/cu${{ inputs.cu }} -r requirements.txt pygit2 -w ./temp_wheel_dir
+python -m pip wheel --no-cache-dir torch torchvision ${{ inputs.xformers }} --extra-index-url https://download.pytorch.org/whl/cu${{ inputs.cu }} -r requirements.txt pygit2 -w ./temp_wheel_dir
 python -m pip install --no-cache-dir ./temp_wheel_dir/*
 echo installed basic
 ls -lah temp_wheel_dir

View File

@@ -49,7 +49,7 @@ jobs:
 echo 'import site' >> ./python3${{ inputs.python_minor }}._pth
 curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
 ./python.exe get-pip.py
-python -m pip wheel torch torchvision torchaudio --pre --extra-index-url https://download.pytorch.org/whl/nightly/cu${{ inputs.cu }} -r ../ComfyUI/requirements.txt pygit2 -w ../temp_wheel_dir
+python -m pip wheel torch torchvision --pre --extra-index-url https://download.pytorch.org/whl/nightly/cu${{ inputs.cu }} -r ../ComfyUI/requirements.txt pygit2 -w ../temp_wheel_dir
 ls ../temp_wheel_dir
 ./python.exe -s -m pip install --pre ../temp_wheel_dir/*
 sed -i '1i../ComfyUI' ./python3${{ inputs.python_minor }}._pth
@@ -69,7 +69,7 @@ jobs:
 cp -r ComfyUI/.ci/windows_base_files/* ./
 echo "call update_comfyui.bat nopause
-..\python_embeded\python.exe -s -m pip install --upgrade --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cu${{ inputs.cu }} -r ../ComfyUI/requirements.txt pygit2
+..\python_embeded\python.exe -s -m pip install --upgrade --pre torch torchvision --extra-index-url https://download.pytorch.org/whl/nightly/cu${{ inputs.cu }} -r ../ComfyUI/requirements.txt pygit2
 pause" > ./update/update_comfyui_and_python_dependencies.bat
 cd ..

Dockerfile (new file)

@@ -0,0 +1,5 @@
+FROM nvcr.io/nvidia/pytorch:24.01-py3
+RUN pip install --no-cache --no-build-isolation git+https://github.com/hiddenswitch/ComfyUI.git
+EXPOSE 8188
+WORKDIR /workspace
+CMD ["comfyui", "--listen"]
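
Once an image built from this Dockerfile is running with its port published (e.g. `docker run -p 8188:8188 ... hiddenswitch/comfyui`), it can be smoke-tested from the host; a minimal sketch using only the standard library, assuming the vanilla ComfyUI `/system_stats` endpoint is exposed:

```python
import json
import urllib.request

# Query the upstream ComfyUI HTTP API to confirm the server is up; the URL
# assumes port 8188 was published to the host when the container started.
with urllib.request.urlopen("http://127.0.0.1:8188/system_stats") as resp:
    print(json.dumps(json.load(resp), indent=2))
```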

README.md

@@ -1,44 +1,28 @@
-ComfyUI
+ComfyUI Distributed
 =======
-The most powerful and modular stable diffusion GUI and backend.
------------
-![ComfyUI Screenshot](comfyui_screenshot.png)
-This UI will let you design and execute advanced stable diffusion pipelines using a graph/nodes/flowchart based interface. For some workflow examples and see what ComfyUI can do you can check out:
-### [ComfyUI Examples](https://comfyanonymous.github.io/ComfyUI_examples/)
-### [Installing ComfyUI](#installing)
+A vanilla, up-to-date fork of [ComfyUI](https://github.com/comfyanonymous/comfyui).
+### New Features
+- Run with `comfyui` in your command line.
+- [Installable](#installing) via `pip`: `pip install git+https://github.com/hiddenswitch/ComfyUI.git`.
+- [Distributed](#distributed-multi-process-and-multi-gpu-comfy): supports multiple GPUs, multiple backends and frontends, including in containers, using RabbitMQ.
+- [Installable custom nodes](#custom-nodes) via `pip`.
+- [New configuration options](#command-line-arguments) for directories, models and metrics.
+- [API](#using-comfyui-as-an-api--programmatically) support, using the vanilla ComfyUI API and new API endpoints.
+- [Embed](#embedded) ComfyUI as a library inside your Python application. No server or frontend needed.
+- [Containers](#containers).
+- Automated tests for new features.
-## Features
-- Nodes/graph/flowchart interface to experiment and create complex Stable Diffusion workflows without needing to code anything.
-- Fully supports SD1.x, SD2.x, [SDXL](https://comfyanonymous.github.io/ComfyUI_examples/sdxl/), [Stable Video Diffusion](https://comfyanonymous.github.io/ComfyUI_examples/video/) and [Stable Cascade](https://comfyanonymous.github.io/ComfyUI_examples/stable_cascade/)
-- Asynchronous Queue system
-- Many optimizations: Only re-executes the parts of the workflow that changes between executions.
-- Command line option: ```--lowvram``` to make it work on GPUs with less than 3GB vram (enabled automatically on GPUs with low vram)
-- Works even if you don't have a GPU with: ```--cpu``` (slow)
-- Can load ckpt, safetensors and diffusers models/checkpoints. Standalone VAEs and CLIP models.
-- Embeddings/Textual inversion
-- [Loras (regular, locon and loha)](https://comfyanonymous.github.io/ComfyUI_examples/lora/)
-- [Hypernetworks](https://comfyanonymous.github.io/ComfyUI_examples/hypernetworks/)
-- Loading full workflows (with seeds) from generated PNG files.
-- Saving/Loading workflows as Json files.
-- Nodes interface can be used to create complex workflows like one for [Hires fix](https://comfyanonymous.github.io/ComfyUI_examples/2_pass_txt2img/) or much more advanced ones.
-- [Area Composition](https://comfyanonymous.github.io/ComfyUI_examples/area_composition/)
-- [Inpainting](https://comfyanonymous.github.io/ComfyUI_examples/inpaint/) with both regular and inpainting models.
-- [ControlNet and T2I-Adapter](https://comfyanonymous.github.io/ComfyUI_examples/controlnet/)
-- [Upscale Models (ESRGAN, ESRGAN variants, SwinIR, Swin2SR, etc...)](https://comfyanonymous.github.io/ComfyUI_examples/upscale_models/)
-- [unCLIP Models](https://comfyanonymous.github.io/ComfyUI_examples/unclip/)
-- [GLIGEN](https://comfyanonymous.github.io/ComfyUI_examples/gligen/)
-- [Model Merging](https://comfyanonymous.github.io/ComfyUI_examples/model_merging/)
-- [LCM models and Loras](https://comfyanonymous.github.io/ComfyUI_examples/lcm/)
-- [SDXL Turbo](https://comfyanonymous.github.io/ComfyUI_examples/sdturbo/)
-- Latent previews with [TAESD](#how-to-show-high-quality-previews)
-- Starts up very fast.
-- Works fully offline: will never download anything.
-- [Config file](extra_model_paths.yaml.example) to set the search paths for models.
-Workflow examples can be found on the [Examples page](https://comfyanonymous.github.io/ComfyUI_examples/)
+### Table of Contents
+- [Workflows](https://comfyanonymous.github.io/ComfyUI_examples/)
+- [Installing](#installing)
+- [Configuration](#command-line-arguments)
+- [Custom Nodes Authoring](#custom-nodes)
+- [API](#using-comfyui-as-an-api--programmatically)
+- [Distributed](#distributed-multi-process-and-multi-gpu-comfy)
 # Getting Started
@@ -52,7 +36,7 @@ Workflow examples can be found on the [Examples page](https://comfyanonymous.github.io/ComfyUI_examples/)
 [ 2. Open a command line on Windows. This is usually cmd.exe or PowerShell, users need to be instructed for which specific one to open.]: #
 [ 3. Create a virtual environment on a disk with a lot of space. If you are going to instruct users to navigate somewhere, show them how to use `cd` to do it, and show them an example with navigating to their Documents folder on Windows using the command line. Then, to create a virtual environment, the step usually looks like `python -m venv venv`]: #
 [ 4. Activate the virtual environment. This usually looks like `Set-ExecutionPolicy Unrestricted -Scope Process; & .\venv\Scripts\activate.ps1` in PowerShell. Observe the caveat of the execution policy problem. Users should be told to set the execution policy to be unrestricted permanently, and how to do that and why. Remember, they need something ergonomic and easy.]: #
-[ 5. Install this package. Currently, that is done with `pip install git+https://github.com/hiddenswitch/ComfyUI.git@installable`.]: #
+[ 5. Install this package. Currently, that is done with `pip install git+https://github.com/hiddenswitch/ComfyUI.git`.]: #
 [ 6. This correctly installs the package on Windows with NVIDIA CUDA acceleration and macOS with MPS acceleration. However, users may have weird setups and need more help.]: #
 [ 7. Now users need a model. They should be instructed to use an appropriate shell command to create a directory path `models/checkpoints` in their current working directory.]: #
 [ 8. Then, they must download https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt into the path ./models/checkpoints/v1-5-pruned-emaonly.ckpt]: #
@@ -96,11 +80,7 @@ On macOS, install exactly Python 3.11 using `brew`, which you can download from
 pip install wheel
 pip install --no-build-isolation git+https://github.com/hiddenswitch/ComfyUI.git
 ```
-4. To run the web server:
-   ```shell
-   comfyui
-   ```
-   Create the directories you can fill with checkpoints:
+4. Create the directories you can fill with checkpoints:
 ```shell
 comfyui --create-directories
 ```
@@ -111,6 +91,22 @@ On macOS, install exactly Python 3.11 using `brew`, which you can download from
 ```
 You can see all the command line options with hints using `comfyui --help`.
+5. Download models. These commands will download the SD1.5 and SDXL models. This assumes you are using the default directories.
+   ```shell
+   mkdir -p models/checkpoints
+   curl -L https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors?download=true -o models/checkpoints/v1-5-pruned-emaonly.safetensors
+   curl -L https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors?download=true -o models/checkpoints/sd_xl_base_1.0.safetensors
+   curl -L https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/resolve/main/sd_xl_refiner_1.0.safetensors?download=true -o models/checkpoints/sd_xl_refiner_1.0.safetensors
+   ```
+6. To run the web server:
+   ```shell
+   comfyui
+   ```
+   To make it accessible over the network:
+   ```shell
+   comfyui --listen
+   ```
 ## Manual Install (Windows, Linux, macOS) For Development
 1. Clone this repo:
@@ -168,7 +164,7 @@ On macOS, install exactly Python 3.11 using `brew`, which you can download from
 Because the package is installed "editably" with `pip install -e .`, any changes you make to the repository will affect the next launch of `comfy`. In IDEA based editors like PyCharm and IntelliJ, the Relodium plugin supports modifying your custom nodes or similar code while the server is running.
-## Intel, DirectML and AMD Experimental Support
+### Intel, DirectML and AMD Experimental Support
 <details>
 #### [Intel Arc](https://github.com/comfyanonymous/ComfyUI/discussions/476)
@@ -178,8 +174,8 @@ Because the package is installed "editably" with `pip install -e .`, any changes
 Follow the manual installation steps. Then:
 ```shell
-pip uninstall torch torchvision torchaudio
-pip install torch torchvision torchaudio
+pip uninstall torch torchvision
+pip install torch torchvision
 pip install torch-directml
 ```
@@ -476,8 +472,7 @@ Ctrl can also be replaced with Cmd instead for macOS users
 ### Command Line Arguments
-<details>
-<pre>
+```
 usage: comfyui.exe [-h] [-c CONFIG_FILE]
        [--write-out-config-file CONFIG_OUTPUT_PATH] [-w CWD]
        [-H [IP]] [--port PORT] [--enable-cors-header [ORIGIN]]
@@ -679,14 +674,15 @@ config.json or specified via -c). Config file syntax allows: key=value,
 flag=true, stuff=[a,b,c] (for details, see syntax at https://goo.gl/R74nmi).
 In general, command-line values override environment variables which override
 config file values which override defaults.
-</pre>
-</details>
+```
 # Using ComfyUI as an API / Programmatically
 There are multiple ways to use this ComfyUI package to run workflows programmatically:
-**Start ComfyUI by creating an ordinary Python object.** This does not create a web server. It runs ComfyUI as a library, like any other package you are familiar with:
+### Embedded
+Start ComfyUI by creating an ordinary Python object. This does not create a web server. It runs ComfyUI as a library, like any other package you are familiar with:
 ```python
 from comfy.client.embedded_comfy_client import EmbeddedComfyClient
@@ -705,7 +701,9 @@ async with EmbeddedComfyClient() as client:
 See [script_examples/basic_api_example.py](script_examples/basic_api_example.py) for a complete example.
-**Start ComfyUI as a remote server, then access it via an API.** This requires you to start ComfyUI somewhere. Then access it via a standardized API.
+### Remote
+Start ComfyUI as a remote server, then access it via an API. This requires you to start ComfyUI somewhere. Then access it via a standardized API.
 ```python
 from comfy.client.aio_client import AsyncRemoteComfyClient
@@ -719,9 +717,13 @@ with open("image.png", "rb") as f:
 See [script_examples/remote_api_example.py](script_examples/remote_api_example.py) for a complete example.
-**Use a typed, generated API client for your programming language and access ComfyUI server remotely as an API.** You can generate the client from [comfy/api/openapi.yaml](comfy/api/openapi.yaml).
-**Submit jobs directly to a distributed work queue.** This package supports AMQP message queues like RabbitMQ. You can submit workflows to the queue, including from the web using RabbitMQ's STOMP support, and receive realtime progress updates from multiple workers. Continue to the next section for more details.
+### OpenAPI Spec for Vanilla API, Typed Clients
+Use a typed, generated API client for your programming language and access the ComfyUI server remotely as an API. You can generate the client from [comfy/api/openapi.yaml](comfy/api/openapi.yaml).
+### RabbitMQ / AMQP Support
+Submit jobs directly to a distributed work queue. This package supports AMQP message queues like RabbitMQ. You can submit workflows to the queue, including from the web using RabbitMQ's STOMP support, and receive realtime progress updates from multiple workers. Continue to the next section for more details.
 # Distributed, Multi-Process and Multi-GPU Comfy
@@ -802,6 +804,20 @@ This means that workers and frontends do **not** have to have the same argument
 Since reading models like large checkpoints over the network can be slow, you can use `--extra-model-paths-config` to specify additional model paths. Or, you can use `--cwd some/path`, where `some/path` is a local directory, and mount `some/path/outputs` to a network directory.
+# Containers
+Build the `Dockerfile`:
+```shell
+docker build . -t hiddenswitch/comfyui
+```
+To run:
+```shell
+docker run -it -v ./output:/workspace/output -v ./models:/workspace/models --gpus=all --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --rm hiddenswitch/comfyui
+```
 ## Community
 [Chat on Matrix: #comfyui_space:matrix.org](https://app.element.io/#/room/%23comfyui_space%3Amatrix.org), an alternative to Discord.
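
For orientation, the embedded usage added to the README above looks roughly like the following; a minimal sketch, assuming `EmbeddedComfyClient.queue_prompt` accepts an API-format workflow dict and returns the workflow's outputs (see [script_examples/basic_api_example.py](script_examples/basic_api_example.py) for the authoritative version):

```python
import asyncio

from comfy.client.embedded_comfy_client import EmbeddedComfyClient

# An API-format workflow dict, e.g. exported with "Save (API Format)" in the
# web UI. Left empty here as a placeholder.
prompt: dict = {}


async def main() -> None:
    # No server or frontend is started; ComfyUI runs in-process as a library.
    async with EmbeddedComfyClient() as client:
        # queue_prompt's name and signature are assumptions based on the
        # example script referenced in the README.
        outputs = await client.queue_prompt(prompt)
        print(outputs)


asyncio.run(main())
```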

comfyui_colab.ipynb (deleted file)

@@ -1,329 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "aaaaaaaaaa"
},
"source": [
"Git clone the repo and install the requirements. (ignore the pip errors about protobuf)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "bbbbbbbbbb"
},
"outputs": [],
"source": [
"#@title Environment Setup\n",
"\n",
"from pathlib import Path\n",
"\n",
"OPTIONS = {}\n",
"\n",
"USE_GOOGLE_DRIVE = False #@param {type:\"boolean\"}\n",
"UPDATE_COMFY_UI = True #@param {type:\"boolean\"}\n",
"WORKSPACE = 'ComfyUI'\n",
"OPTIONS['USE_GOOGLE_DRIVE'] = USE_GOOGLE_DRIVE\n",
"OPTIONS['UPDATE_COMFY_UI'] = UPDATE_COMFY_UI\n",
"\n",
"if OPTIONS['USE_GOOGLE_DRIVE']:\n",
" !echo \"Mounting Google Drive...\"\n",
" %cd /\n",
" \n",
" from google.colab import drive\n",
" drive.mount('/content/drive')\n",
"\n",
" WORKSPACE = \"/content/drive/MyDrive/ComfyUI\"\n",
" %cd /content/drive/MyDrive\n",
"\n",
"![ ! -d $WORKSPACE ] && echo -= Initial setup ComfyUI =- && git clone https://github.com/comfyanonymous/ComfyUI\n",
"%cd $WORKSPACE\n",
"\n",
"if OPTIONS['UPDATE_COMFY_UI']:\n",
" !echo -= Updating ComfyUI =-\n",
" !git pull\n",
"\n",
"!echo -= Install dependencies =-\n",
"!pip install xformers!=0.0.18 -r requirements.txt --extra-index-url https://download.pytorch.org/whl/cu121 --extra-index-url https://download.pytorch.org/whl/cu118 --extra-index-url https://download.pytorch.org/whl/cu117"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "cccccccccc"
},
"source": [
"Download some models/checkpoints/vae or custom comfyui nodes (uncomment the commands for the ones you want)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "dddddddddd"
},
"outputs": [],
"source": [
"# Checkpoints\n",
"\n",
"### SDXL\n",
"### I recommend these workflow examples: https://comfyanonymous.github.io/ComfyUI_examples/sdxl/\n",
"\n",
"#!wget -c https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors -P ./models/checkpoints/\n",
"#!wget -c https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/resolve/main/sd_xl_refiner_1.0.safetensors -P ./models/checkpoints/\n",
"\n",
"# SDXL ReVision\n",
"#!wget -c https://huggingface.co/comfyanonymous/clip_vision_g/resolve/main/clip_vision_g.safetensors -P ./models/clip_vision/\n",
"\n",
"# SD1.5\n",
"!wget -c https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt -P ./models/checkpoints/\n",
"\n",
"# SD2\n",
"#!wget -c https://huggingface.co/stabilityai/stable-diffusion-2-1-base/resolve/main/v2-1_512-ema-pruned.safetensors -P ./models/checkpoints/\n",
"#!wget -c https://huggingface.co/stabilityai/stable-diffusion-2-1/resolve/main/v2-1_768-ema-pruned.safetensors -P ./models/checkpoints/\n",
"\n",
"# Some SD1.5 anime style\n",
"#!wget -c https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix2/AbyssOrangeMix2_hard.safetensors -P ./models/checkpoints/\n",
"#!wget -c https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix3/AOM3A1_orangemixs.safetensors -P ./models/checkpoints/\n",
"#!wget -c https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix3/AOM3A3_orangemixs.safetensors -P ./models/checkpoints/\n",
"#!wget -c https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/anything-v3-fp16-pruned.safetensors -P ./models/checkpoints/\n",
"\n",
"# Waifu Diffusion 1.5 (anime style SD2.x 768-v)\n",
"#!wget -c https://huggingface.co/waifu-diffusion/wd-1-5-beta3/resolve/main/wd-illusion-fp16.safetensors -P ./models/checkpoints/\n",
"\n",
"\n",
"# unCLIP models\n",
"#!wget -c https://huggingface.co/comfyanonymous/illuminatiDiffusionV1_v11_unCLIP/resolve/main/illuminatiDiffusionV1_v11-unclip-h-fp16.safetensors -P ./models/checkpoints/\n",
"#!wget -c https://huggingface.co/comfyanonymous/wd-1.5-beta2_unCLIP/resolve/main/wd-1-5-beta2-aesthetic-unclip-h-fp16.safetensors -P ./models/checkpoints/\n",
"\n",
"\n",
"# VAE\n",
"!wget -c https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors -P ./models/vae/\n",
"#!wget -c https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/VAEs/orangemix.vae.pt -P ./models/vae/\n",
"#!wget -c https://huggingface.co/hakurei/waifu-diffusion-v1-4/resolve/main/vae/kl-f8-anime2.ckpt -P ./models/vae/\n",
"\n",
"\n",
"# Loras\n",
"#!wget -c https://civitai.com/api/download/models/10350 -O ./models/loras/theovercomer8sContrastFix_sd21768.safetensors #theovercomer8sContrastFix SD2.x 768-v\n",
"#!wget -c https://civitai.com/api/download/models/10638 -O ./models/loras/theovercomer8sContrastFix_sd15.safetensors #theovercomer8sContrastFix SD1.x\n",
"#!wget -c https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_offset_example-lora_1.0.safetensors -P ./models/loras/ #SDXL offset noise lora\n",
"\n",
"\n",
"# T2I-Adapter\n",
"#!wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_depth_sd14v1.pth -P ./models/controlnet/\n",
"#!wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_seg_sd14v1.pth -P ./models/controlnet/\n",
"#!wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_sketch_sd14v1.pth -P ./models/controlnet/\n",
"#!wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_keypose_sd14v1.pth -P ./models/controlnet/\n",
"#!wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_openpose_sd14v1.pth -P ./models/controlnet/\n",
"#!wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_color_sd14v1.pth -P ./models/controlnet/\n",
"#!wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_canny_sd14v1.pth -P ./models/controlnet/\n",
"\n",
"# T2I Styles Model\n",
"#!wget -c https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_style_sd14v1.pth -P ./models/style_models/\n",
"\n",
"# CLIPVision model (needed for styles model)\n",
"#!wget -c https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/pytorch_model.bin -O ./models/clip_vision/clip_vit14.bin\n",
"\n",
"\n",
"# ControlNet\n",
"#!wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11e_sd15_ip2p_fp16.safetensors -P ./models/controlnet/\n",
"#!wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11e_sd15_shuffle_fp16.safetensors -P ./models/controlnet/\n",
"#!wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_canny_fp16.safetensors -P ./models/controlnet/\n",
"#!wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11f1p_sd15_depth_fp16.safetensors -P ./models/controlnet/\n",
"#!wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_inpaint_fp16.safetensors -P ./models/controlnet/\n",
"#!wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_lineart_fp16.safetensors -P ./models/controlnet/\n",
"#!wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_mlsd_fp16.safetensors -P ./models/controlnet/\n",
"#!wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_normalbae_fp16.safetensors -P ./models/controlnet/\n",
"#!wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_openpose_fp16.safetensors -P ./models/controlnet/\n",
"#!wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_scribble_fp16.safetensors -P ./models/controlnet/\n",
"#!wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_seg_fp16.safetensors -P ./models/controlnet/\n",
"#!wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_softedge_fp16.safetensors -P ./models/controlnet/\n",
"#!wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15s2_lineart_anime_fp16.safetensors -P ./models/controlnet/\n",
"#!wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11u_sd15_tile_fp16.safetensors -P ./models/controlnet/\n",
"\n",
"# ControlNet SDXL\n",
"#!wget -c https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank256/control-lora-canny-rank256.safetensors -P ./models/controlnet/\n",
"#!wget -c https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank256/control-lora-depth-rank256.safetensors -P ./models/controlnet/\n",
"#!wget -c https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank256/control-lora-recolor-rank256.safetensors -P ./models/controlnet/\n",
"#!wget -c https://huggingface.co/stabilityai/control-lora/resolve/main/control-LoRAs-rank256/control-lora-sketch-rank256.safetensors -P ./models/controlnet/\n",
"\n",
"# Controlnet Preprocessor nodes by Fannovel16\n",
"#!cd custom_nodes && git clone https://github.com/Fannovel16/comfy_controlnet_preprocessors; cd comfy_controlnet_preprocessors && python install.py\n",
"\n",
"\n",
"# GLIGEN\n",
"#!wget -c https://huggingface.co/comfyanonymous/GLIGEN_pruned_safetensors/resolve/main/gligen_sd14_textbox_pruned_fp16.safetensors -P ./models/gligen/\n",
"\n",
"\n",
"# ESRGAN upscale model\n",
"#!wget -c https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P ./models/upscale_models/\n",
"#!wget -c https://huggingface.co/sberbank-ai/Real-ESRGAN/resolve/main/RealESRGAN_x2.pth -P ./models/upscale_models/\n",
"#!wget -c https://huggingface.co/sberbank-ai/Real-ESRGAN/resolve/main/RealESRGAN_x4.pth -P ./models/upscale_models/\n",
"\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "kkkkkkkkkkkkkkk"
},
"source": [
"### Run ComfyUI with cloudflared (Recommended Way)\n",
"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "jjjjjjjjjjjjjj"
},
"outputs": [],
"source": [
"!wget https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb\n",
"!dpkg -i cloudflared-linux-amd64.deb\n",
"\n",
"import subprocess\n",
"import threading\n",
"import time\n",
"import socket\n",
"import urllib.request\n",
"\n",
"def iframe_thread(port):\n",
" while True:\n",
" time.sleep(0.5)\n",
" sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n",
" result = sock.connect_ex(('127.0.0.1', port))\n",
" if result == 0:\n",
" break\n",
" sock.close()\n",
" print(\"\\nComfyUI finished loading, trying to launch cloudflared (if it gets stuck here cloudflared is having issues)\\n\")\n",
"\n",
" p = subprocess.Popen([\"cloudflared\", \"tunnel\", \"--url\", \"http://127.0.0.1:{}\".format(port)], stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n",
" for line in p.stderr:\n",
" l = line.decode()\n",
" if \"trycloudflare.com \" in l:\n",
" print(\"This is the URL to access ComfyUI:\", l[l.find(\"http\"):], end='')\n",
" #print(l, end='')\n",
"\n",
"\n",
"threading.Thread(target=iframe_thread, daemon=True, args=(8188,)).start()\n",
"\n",
"!python main.py --dont-print-server"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "kkkkkkkkkkkkkk"
},
"source": [
"### Run ComfyUI with localtunnel\n",
"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "jjjjjjjjjjjjj"
},
"outputs": [],
"source": [
"!npm install -g localtunnel\n",
"\n",
"import subprocess\n",
"import threading\n",
"import time\n",
"import socket\n",
"import urllib.request\n",
"\n",
"def iframe_thread(port):\n",
" while True:\n",
" time.sleep(0.5)\n",
" sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n",
" result = sock.connect_ex(('127.0.0.1', port))\n",
" if result == 0:\n",
" break\n",
" sock.close()\n",
" print(\"\\nComfyUI finished loading, trying to launch localtunnel (if it gets stuck here localtunnel is having issues)\\n\")\n",
"\n",
" print(\"The password/enpoint ip for localtunnel is:\", urllib.request.urlopen('https://ipv4.icanhazip.com').read().decode('utf8').strip(\"\\n\"))\n",
" p = subprocess.Popen([\"lt\", \"--port\", \"{}\".format(port)], stdout=subprocess.PIPE)\n",
" for line in p.stdout:\n",
" print(line.decode(), end='')\n",
"\n",
"\n",
"threading.Thread(target=iframe_thread, daemon=True, args=(8188,)).start()\n",
"\n",
"!python main.py --dont-print-server"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "gggggggggg"
},
"source": [
"### Run ComfyUI with colab iframe (use only in case the previous way with localtunnel doesn't work)\n",
"\n",
"You should see the ui appear in an iframe. If you get a 403 error, it's your firefox settings or an extension that's messing things up.\n",
"\n",
"If you want to open it in another window use the link.\n",
"\n",
"Note that some UI features like live image previews won't work because the colab iframe blocks websockets."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "hhhhhhhhhh"
},
"outputs": [],
"source": [
"import threading\n",
"import time\n",
"import socket\n",
"def iframe_thread(port):\n",
" while True:\n",
" time.sleep(0.5)\n",
" sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n",
" result = sock.connect_ex(('127.0.0.1', port))\n",
" if result == 0:\n",
" break\n",
" sock.close()\n",
" from google.colab import output\n",
" output.serve_kernel_port_as_iframe(port, height=1024)\n",
" print(\"to open it in a window you can open this link here:\")\n",
" output.serve_kernel_port_as_window(port)\n",
"\n",
"threading.Thread(target=iframe_thread, daemon=True, args=(8188,)).start()\n",
"\n",
"!python main.py --dont-print-server"
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"provenance": []
},
"gpuClass": "standard",
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

requirements.txt

@@ -1,5 +1,4 @@
 torch
-torchaudio
 torchvision
 torchdiffeq>=0.2.3
 torchsde>=0.2.6
@@ -9,7 +8,7 @@ transformers>=4.29.1
 safetensors>=0.3.0
 pytorch-lightning>=2.0.0
 aiohttp>=3.8.4
-accelerate>=0.18.0
+accelerate>=0.25.0
 pyyaml>=6.0
 scikit-image>=0.20.0
 jsonmerge>=1.9.0
@@ -25,7 +24,7 @@ importlib_resources
 Pillow
 scipy
 tqdm
-protobuf==3.20.3
+protobuf
 psutil
 ConfigArgParse
 aio-pika

setup.py

@@ -49,15 +49,6 @@ Indicates if this is installing an editable (develop) mode package
 is_editable = '--editable' in sys.argv or '-e' in sys.argv or (
         'python' in sys.argv and 'setup.py' in sys.argv and 'develop' in sys.argv)
-# If we're installing with no build isolation, we can check if torch is already installed in the environment, and if so,
-# go ahead and use the version that is already installed.
-is_build_isolated_and_torch_version: Optional[str]
-try:
-    import torch
-    print(f"comfyui setup.py: torch version was {torch.__version__} and built without build isolation, using this torch instead of upgrading", file=sys.stderr)
-    is_build_isolated_and_torch_version = torch.__version__
-except Exception as e:
-    is_build_isolated_and_torch_version = None
 def _is_nvidia() -> bool:
     system = platform.system().lower()
@@ -111,13 +102,21 @@ def _is_linux_arm64():
 def dependencies() -> List[str]:
     _dependencies = open(os.path.join(os.path.dirname(__file__), "requirements.txt")).readlines()
-    # torch is already installed, and we could have only known this if the user specifically requested a
-    # no-build-isolation build, so the user knows what is going on
-    if is_build_isolated_and_torch_version is not None:
+    # If we're installing with no build isolation, we can check if torch is already installed in the environment, and if
+    # so, go ahead and use the version that is already installed.
+    existing_torch: Optional[str]
+    try:
+        import torch
+        print(f"comfyui setup.py: torch version was {torch.__version__} and built without build isolation, using this torch instead of upgrading", file=sys.stderr)
+        existing_torch = torch.__version__
+    except Exception:
+        existing_torch = None
+    if existing_torch is not None:
         for i, dep in enumerate(_dependencies):
             stripped = dep.strip()
             if stripped == "torch":
-                _dependencies[i] = f"{stripped}=={is_build_isolated_and_torch_version}"
+                _dependencies[i] = f"{stripped}=={existing_torch}"
                 break
     return _dependencies
 _alternative_indices = [amd_torch_index, nvidia_torch_index]
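
The logic this hunk moves into `dependencies()` can be read as a small pure function; a standalone sketch of the same pinning behavior, runnable outside setup.py:

```python
from typing import List, Optional


def pin_existing_torch(requirements: List[str]) -> List[str]:
    # If torch is already importable (e.g. a `pip install --no-build-isolation`
    # into an environment that ships its own torch, like the NGC base image in
    # the new Dockerfile), pin the requirement to that exact version instead of
    # letting pip resolve and download a new one.
    existing_torch: Optional[str]
    try:
        import torch
        existing_torch = torch.__version__
    except Exception:
        existing_torch = None
    if existing_torch is None:
        return requirements
    return [
        f"torch=={existing_torch}" if line.strip() == "torch" else line
        for line in requirements
    ]


# e.g. in an environment with torch 2.2.0 installed, "torch" becomes "torch==2.2.0"
print(pin_existing_torch(["torch", "torchvision", "torchdiffeq>=0.2.3"]))
```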