Fix issues with directories and running on macOS

- include detailed runtime instructions for Windows and macOS
- include instructions for agreeing to use Hugging Face repositories
 - always create directories by default when run interactively
 - model downloader now supports multiple folder names for known models
 - improve logging in sd.py
doctorpangloss 2024-12-18 15:37:16 -08:00
parent 65f3be4f8f
commit 86b15084d5
5 changed files with 193 additions and 106 deletions

README.md

@@ -97,66 +97,59 @@ A vanilla, up-to-date fork of [ComfyUI](https://github.com/comfyanonymous/comfyui)
## Installing
You must have Python 3.10, 3.11 or 3.12 installed. On Windows, download the latest Python from the Python website.
+These instructions will install an interactive ComfyUI using the command line.
-On macOS, you will need Python 3.10, 3.11 or 3.12, which is easy to install using [`brew`](https://brew.sh): `brew install python@3.12`. You can check which version of Python you have installed using `python --version`.
### Windows
When using Windows, open the **Windows Powershell** app. Then observe you are at a command line, and it is printing "where" you are in your file system: your user directory (e.g., `C:\Users\doctorpangloss`). This is where a bunch of files will go. If you want files to go somewhere else, consult a chat bot for the basics of using command lines, because it is beyond the scope of this document. Then:
-1. Create a virtual environment:
-```shell
-python -m venv venv
-```
-2. Activate it on
+1. Install Python 3.12, 3.11 or 3.10. You can do this from the Python website; or, you can use `chocolatey`, a Windows package manager:
+```shell
+Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))
+choco install -y python --version 3.12.6
+```
+2. Install `uv`, which makes subsequent installation of Python packages much faster:
+```shell
+choco install -y uv
+```
+3. Switch into a directory where you want to store your outputs, custom nodes and models. This is your ComfyUI workspace. For example, if you want to store your workspace in a directory called `ComfyUI_Workspace` in your Documents folder:
+```powershell
+mkdir ~/Documents/ComfyUI_Workspace
+cd ~/Documents/ComfyUI_Workspace
+```
+4. Create a virtual environment:
+```shell
+uv venv --seed --python 3.12
+```
+5. Activate it on
**Windows (PowerShell):**
-```pwsh
+```powershell
Set-ExecutionPolicy Unrestricted -Scope Process
-& .\venv\Scripts\activate.ps1
+& .\.venv\Scripts\activate.ps1
```
-**Linux and macOS**
-```shell
-source ./venv/bin/activate
-```
+6. Run the following command to install `comfyui` into your current environment. This will correctly select the version of `torch` that matches the GPU on your machine (NVIDIA or CPU on Windows; NVIDIA, Intel, AMD or CPU on Linux; CPU on macOS):
+```powershell
+uv pip install setuptools wheel
+uv pip install "comfyui[withtorch]@git+https://github.com/hiddenswitch/ComfyUI.git"
+```
-3. Run the following command to install `comfyui` into your current environment. This will correctly select the version of `torch` that matches the GPU on your machine (NVIDIA or CPU on Windows, NVIDIA, Intel, AMD or CPU on Linux, CPU on macOS):
-```shell
-pip install "comfyui[withtorch]@git+https://github.com/hiddenswitch/ComfyUI.git"
-```
-**Recommended**: Currently, `torch 2.5.0` is the latest version that `xformers` is compatible with. On Windows, install it first, along with `xformers`, for maximum compatibility and the best performance without advanced techniques in ComfyUI:
-```shell
-pip install torch==2.5.1+cu121 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
-pip install --no-build-isolation --no-deps xformers==0.0.28.post3 --index-url https://download.pytorch.org/whl/
-pip install comfyui@git+https://github.com/hiddenswitch/ComfyUI.git
+**Recommended**: Install `xformers`:
+```powershell
+uv pip install --no-build-isolation --no-deps xformers==0.0.28.post3 --index-url https://download.pytorch.org/whl/
```
To enable `torchaudio` support on Windows, install it directly:
-```shell
-pip install torchaudio==2.5.0+cu121 --index-url https://download.pytorch.org/whl/cu121
+```powershell
+uv pip install torchaudio==2.5.1+cu121 --index-url https://download.pytorch.org/whl/cu121
```
**Advanced**: If you are running in Google Colab or another environment which has already installed `torch` for you; or, if you are an application developer:
```shell
# You will need wheel, which isn't included in Python 3.11 or later
pip install wheel
pip install --no-build-isolation comfyui@git+https://github.com/hiddenswitch/ComfyUI.git
```
This will use your pre-installed torch. This is also the appropriate dependency for packages, and is the one published to PyPI. To automatically install with `torch` nightlies, use:
```shell
pip install comfyui[withtorchnightlies]@git+https://github.com/hiddenswitch/ComfyUI.git
```
-4. Create the directories you can fill with checkpoints:
-```shell
-comfyui --create-directories
-```
-Your current working directory is wherever you started running `comfyui`. You don't need to clone this repository, observe it is omitted from the instructions.
-You can `cd` into a different directory containing `models/`, or if the models are located somewhere else, like `C:/some directory/models`, do:
-```shell
-comfyui --cwd="C:/some directory/"
-```
-You can see all the command line options with hints using `comfyui --help`.
-5. To run the web server:
+7. To run the web server:
```shell
comfyui
```
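After the install in step 6, it's worth confirming that the `torch` build the installer selected actually sees your hardware before launching the server; a quick sanity check using standard `torch` calls, nothing fork-specific:
```python
# Run inside the activated virtual environment.
import torch

print(torch.__version__)
print(torch.cuda.is_available())          # True on a working NVIDIA setup
print(torch.backends.mps.is_available())  # True on Apple Silicon macOS
```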
@@ -167,31 +160,113 @@ When using Windows, open the **Windows Powershell** app. Then observe you are at
comfyui --listen
```
**Running**
On Windows, you will need to open PowerShell and activate your virtual environment whenever you want to run `comfyui`.
```powershell
-& .\venv\Scripts\activate.ps1
+cd ~\Documents\ComfyUI_Workspace\
+& .venv\Scripts\activate.ps1
comfyui
```
Upgrades are delivered frequently and automatically. To force one immediately, run pip upgrade like so:
```shell
-pip install --no-build-isolation --no-deps --upgrade comfyui@git+https://github.com/hiddenswitch/ComfyUI.git
+uv pip install --no-build-isolation --upgrade "comfyui@git+https://github.com/hiddenswitch/ComfyUI.git"
```
-**Advanced: Using `uv`**:
+### macOS
-`uv` is a significantly faster and improved Python package manager. On Windows, use the following commands to install the package from scratch about 6x faster than vanilla `pip`:
+1. Install Python 3.10, 3.11 or 3.12. Do this by installing [`brew`](https://brew.sh), a macOS package manager, first:
+```shell
+/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
+```
+Then, install `python` and `uv`:
+```shell
+HOMEBREW_NO_AUTO_UPDATE=1 brew install python@3.12 uv
+```
+3. Switch into a directory where you want to store your outputs, custom nodes and models. This is your ComfyUI workspace. For example, if you want to store your workspace in a directory called `ComfyUI_Workspace` in your Documents folder:
-```powershell
-uv venv --seed
-& .\venv\Scripts\activate.ps1
-uv pip install comfyui[withtorch]@git+https://github.com/hiddenswitch/ComfyUI.git
-python -m comfy.cmd.main
+```shell
+mkdir -pv ~/Documents/ComfyUI_Workspace
+cd ~/Documents/ComfyUI_Workspace
```
+4. Create a virtual environment:
+```shell
+uv venv --seed --python 3.12
+```
+5. Activate it on
+**macOS**
+```shell
+source .venv/bin/activate
+```
+6. Run the following command to install `comfyui` into your current environment. This will correctly select the version of `torch` that matches the GPU on your machine (NVIDIA or CPU on Windows; NVIDIA, Intel, AMD or CPU on Linux; CPU on macOS):
+```shell
+uv pip install setuptools wheel
+uv pip install "comfyui[withtorch]@git+https://github.com/hiddenswitch/ComfyUI.git"
+```
+To enable `torchaudio` support, install it directly:
+```shell
+uv pip install torchaudio --index-url https://download.pytorch.org/whl/
+```
+7. To run the web server:
+```shell
+comfyui
+```
+When you run workflows that use well-known models, this will download them automatically.
+To make it accessible over the network:
+```shell
+comfyui --listen
+```
+**Running**
+On macOS, you will need to open the terminal and activate your virtual environment whenever you want to run `comfyui`.
+```shell
+cd ~/Documents/ComfyUI_Workspace/
+source .venv/bin/activate
+comfyui
+```
-### LTS Custom Nodes
+## Model Downloading
+ComfyUI LTS supports downloading models on demand.
+Known models will be downloaded from Hugging Face or CivitAI.
+To support licensed models like Flux, you will need to log in to Hugging Face from the command line.
+1. Activate your Python environment: `cd` into your workspace directory. For example, if your workspace is located in `~/Documents/ComfyUI_Workspace`, do:
+```shell
+cd ~/Documents/ComfyUI_Workspace
+```
+Then, on Windows: `& .venv/scripts/activate.ps1`; on macOS: `source .venv/bin/activate`.
+2. Log in with Hugging Face:
+```shell
+uv pip install huggingface-cli
+huggingface-cli login
+```
+3. Agree to the terms for a repository. For example, visit https://huggingface.co/black-forest-labs/FLUX.1-dev, log in with your Hugging Face account, then choose **Agree**.
+To disable model downloading, start with the command line argument `--disable-known-models`: `comfyui --disable-known-models`. However, this will generally only increase your toil for no return.
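For scripted setups, the same login and gated download can be done from Python; a minimal sketch using `huggingface_hub` (the repository is the Flux example above; the exact filename inside it is an assumption):
```python
# A minimal sketch, assuming huggingface_hub is installed in the venv.
from huggingface_hub import hf_hub_download, login

# Equivalent to `huggingface-cli login`; paste a token from
# https://huggingface.co/settings/tokens when prompted.
login()

# Once the FLUX.1-dev terms have been accepted on the website, gated
# files download like any other known model.
path = hf_hub_download(
    repo_id="black-forest-labs/FLUX.1-dev",
    filename="flux1-dev.safetensors",  # assumed filename in that repo
)
print(path)
```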
+### Saving Space on Windows
+To save space, enable Developer Mode in the Windows Settings, then reboot your computer. This way, Hugging Face can download models into a common place for all your apps, and place small "link" files that ComfyUI and others can read instead of whole copies of models.
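Whether Developer Mode took effect can be verified from Python before downloading anything large; a quick check using only the standard library (the paths are throwaway temporaries):
```python
# Checks that symlinks work without elevation, i.e. that Developer Mode
# is active on Windows. Hugging Face uses symlinks to deduplicate files.
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, "target.txt")
    link = os.path.join(tmp, "link.txt")
    open(target, "w").close()
    try:
        os.symlink(target, link)
        print("symlinks available: downloads can be deduplicated")
    except OSError:
        print("symlinks unavailable: enable Developer Mode and reboot")
```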
+## LTS Custom Nodes
These packages have been adapted to be installable with `pip` and download models to the correct places:
@@ -242,16 +317,6 @@ You can enable experimental memory efficient attention on pytorch 2.5 in ComfyUI
You can also try setting this env variable `PYTORCH_TUNABLEOP_ENABLED=1` which might speed things up at the cost of a very slow initial run.
-### Model Downloading
-ComfyUI LTS supports downloading models on demand. Its list of known models includes the most notable and common Stable Diffusion architecture checkpoints, slider LoRAs, all the notable ControlNets for SD1.5 and SDXL, and a small selection of LLM models. Additionally, all other supported LTS nodes will download models using the same mechanisms. This means that you will save storage space and time: you won't have to ever figure out the "right name" for a model, where to download it from, or where to put it ever again.
-Known models will be downloaded from Hugging Face or CivitAI. Hugging Face has a thoughtful approach to file downloading and organization. This means you do not have to toil about having one, or many, files, or worry about where to put them.
-On Windows platforms, symbolic links should be enabled to minimize the amount of space used: Enable Developer Mode in the Windows Settings, then reboot your computer. This way, Hugging Face can download models into a common place for all your apps, and place small "link" files that ComfyUI and others can read instead of whole copies of models.
-To disable model downloading, start with the command line argument `--disable-known-models`: `comfyui --disable-known-models`. However, this will generally only increase your toil for no return.
## Manual Install (Windows, Linux, macOS) For Development
1. Clone this repo:


@@ -178,8 +178,14 @@ def get_temp_directory() -> str:
     return str(_resolve_path_with_compatibility(_folder_names_and_paths().application_paths.temp_directory))

-def get_input_directory() -> str:
-    return str(_resolve_path_with_compatibility(_folder_names_and_paths().application_paths.input_directory))
+def get_input_directory(mkdirs=True) -> str:
+    res = str(_resolve_path_with_compatibility(_folder_names_and_paths().application_paths.input_directory))
+    if mkdirs:
+        try:
+            os.makedirs(res, exist_ok=True)
+        except Exception as exc_info:
+            logger.warning(f"could not create directory {res} when trying to access input directory", exc_info)
+    return res

def get_user_directory() -> str:
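A short usage sketch of the new parameter; the import path below is an assumption, not taken from this diff, while the behavior follows the hunk above:
```python
# Sketch only: the module's import path is assumed.
from comfy.cmd import folder_paths

# New default: the input directory is created on first access if missing.
input_dir = folder_paths.get_input_directory()

# Callers that just want the configured path, with no filesystem side
# effects, can opt out of directory creation.
input_path_only = folder_paths.get_input_directory(mkdirs=False)
```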


@@ -148,12 +148,8 @@ async def main(from_script_dir: Optional[Path] = None):
    for config_path in itertools.chain(*args.extra_model_paths_config):
        load_extra_path_config(config_path)

-    # create the default directories if we're instructed to, then exit
-    # or, if it's a windows standalone build, the single .exe file should have its side-by-side directories always created
-    if args.create_directories:
-        import_all_nodes_in_workspace(raise_on_failure=False)
-        folder_paths.create_directories()
-        return
+    # always create directories when started interactively
+    folder_paths.create_directories()

    if args.windows_standalone_build:
        folder_paths.create_directories()
@@ -225,6 +221,11 @@ async def main(from_script_dir: Optional[Path] = None):
        # for CI purposes, try importing all the nodes
        import_all_nodes_in_workspace(raise_on_failure=True)
        exit(0)
+    else:
+        # we no longer lazily load nodes. we'll do it now for the sake of creating directories
+        import_all_nodes_in_workspace(raise_on_failure=False)
+        # now that nodes are loaded, create more directories if appropriate
+        folder_paths.create_directories()

    call_on_start = None
    if args.auto_launch:
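Taken together with the earlier hunk, the intended startup ordering is roughly the following schematic, using only names that appear in this diff:
```python
# Schematic only: directory creation now happens twice by design.
folder_paths.create_directories()                      # base folders, before nodes load

import_all_nodes_in_workspace(raise_on_failure=False)  # nodes may register new model folders

folder_paths.create_directories()                      # pick up folders added by nodes
```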


@@ -5,11 +5,12 @@ import logging
import operator
import os
import shutil
+from collections.abc import Sequence, MutableSequence
from functools import reduce
from itertools import chain
from os.path import join
from pathlib import Path
-from typing import List, Optional, Sequence, Final, Set, MutableSequence
+from typing import List, Optional, Final, Set

import tqdm
from huggingface_hub import hf_hub_download, scan_cache_dir, snapshot_download, HfFileSystem
@@ -204,18 +205,31 @@ Visit the repository, accept the terms, and then do one of the following:
class KnownDownloadables(collections.UserList[Downloadable]):
-    def __init__(self, data, folder_name: Optional[str] = None):
-        # we're not invoking the constructor because we want a reference to the passed list
+    # noinspection PyMissingConstructor
+    def __init__(self, data, folder_name: Optional[str | Sequence[str]] = None, folder_names: Optional[Sequence[str]] = None):
+        # this should be a view
        self.data = data
-        self._folder_name = folder_name
+        folder_names = folder_names or []
+        if isinstance(folder_name, str):
+            folder_names.append(folder_name)
+        elif folder_name is not None and hasattr(folder_name, "__getitem__") and len(folder_name[0]) > 1:
+            folder_names += folder_name
+        self._folder_names = folder_names

    @property
-    def folder_name(self) -> str:
-        return self._folder_name
+    def folder_names(self) -> list[str]:
+        return self._folder_names

-    @folder_name.setter
-    def folder_name(self, value: str):
-        self._folder_name = value
+    @folder_names.setter
+    def folder_names(self, value: list[str]):
+        self._folder_names = value
+
+    def __contains__(self, item):
+        if isinstance(item, str):
+            return item in self._folder_names
+        else:
+            return item in self.data

KNOWN_CHECKPOINTS: Final[KnownDownloadables] = KnownDownloadables([
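The `__contains__` override is what lets `_get_known_models_for_folder_name` below match a list under any of its folder aliases; a self-contained sketch of the same pattern, with illustrative class and variable names:
```python
# A list-like container whose `in` operator answers folder-name queries
# for strings and falls back to ordinary membership otherwise.
from collections import UserList
from typing import Optional, Sequence


class FolderScopedList(UserList):
    def __init__(self, data, folder_names: Optional[Sequence[str]] = None):
        super().__init__(data)
        self.folder_names = list(folder_names or [])

    def __contains__(self, item):
        if isinstance(item, str):
            # strings are always treated as folder-name queries here
            return item in self.folder_names
        return item in self.data


models = FolderScopedList(["flux1-dev.safetensors"], folder_names=["diffusion_models", "unet"])
assert "diffusion_models" in models and "unet" in models  # either alias matches
assert "flux1-dev.safetensors" in models.data             # item lookups go through .data
```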
@@ -445,7 +459,7 @@ KNOWN_UNET_MODELS: Final[KnownDownloadables] = KnownDownloadables([
    HuggingFile("Kijai/flux-fp8", "flux1-schnell-fp8.safetensors"),
    HuggingFile("Comfy-Org/mochi_preview_repackaged", "split_files/diffusion_models/mochi_preview_bf16.safetensors"),
    HuggingFile("Comfy-Org/mochi_preview_repackaged", "split_files/diffusion_models/mochi_preview_fp8_scaled.safetensors"),
-], folder_name="diffusion_models")
+], folder_names=["diffusion_models", "unet"])

KNOWN_CLIP_MODELS: Final[KnownDownloadables] = KnownDownloadables([
    # todo: is this correct?
@@ -457,7 +471,7 @@ KNOWN_CLIP_MODELS: Final[KnownDownloadables] = KnownDownloadables([
    # uses names from https://comfyanonymous.github.io/ComfyUI_examples/audio/
    HuggingFile("google-t5/t5-base", "model.safetensors", save_with_filename="t5_base.safetensors"),
    HuggingFile("zer0int/CLIP-GmP-ViT-L-14", "ViT-L-14-TEXT-detail-improved-hiT-GmP-TE-only-HF.safetensors"),
-], folder_name="clip")
+], folder_names=["clip", "text_encoders"])

KNOWN_STYLE_MODELS: Final[KnownDownloadables] = KnownDownloadables([
    HuggingFile("black-forest-labs/FLUX.1-Redux-dev", "flux1-redux-dev.safetensors"),
@@ -486,7 +500,7 @@ def _is_known_model_in_models_db(obj: list[Downloadable] | KnownDownloadables):

def _get_known_models_for_folder_name(folder_name: str) -> List[Downloadable]:
-    return list(chain.from_iterable([candidate for candidate in _known_models_db if candidate.folder_name == folder_name]))
+    return list(chain.from_iterable([candidate for candidate in _known_models_db if folder_name in candidate]))

def add_known_models(folder_name: str, known_models: Optional[List[Downloadable]] | Downloadable = None, *models: Downloadable) -> MutableSequence[Downloadable]:
@@ -496,7 +510,7 @@ def add_known_models(folder_name: str, known_models: Optional[List[Downloadable]] | Downloadable = None, *models: Downloadable) -> MutableSequence[Downloadable]:
    if known_models is None:
        try:
-            known_models = next(candidate for candidate in _known_models_db if candidate.folder_name == folder_name)
+            known_models = next(candidate for candidate in _known_models_db if folder_name in candidate)
        except StopIteration:
            add_model_folder_path(folder_name, extensions=supported_pt_extensions)
            known_models = KnownDownloadables([], folder_name=folder_name)

sd.py

@@ -43,6 +43,7 @@ from .text_encoders import sd2_clip
from .text_encoders import sd3_clip
from .utils import ProgressBar

+logger = logging.getLogger(__name__)

def load_lora_for_models(model, clip, _lora, strength_model, strength_clip):
    key_map = {}
@@ -70,7 +71,7 @@ def load_lora_for_models(model, clip, _lora, strength_model, strength_clip):
    k1 = set(k1)
    for x in loaded:
        if (x not in k) and (x not in k1):
-            logging.warning("NOT LOADED {}".format(x))
+            logger.warning("NOT LOADED {}".format(x))

    return (new_modelpatcher, new_clip)
@@ -105,7 +106,7 @@ class CLIP:
                load_device = offload_device
                if params['device'] != offload_device:
                    self.cond_stage_model.to(offload_device)
-                    logging.warning("Had to shift TE back.")
+                    logger.warning("Had to shift TE back.")

        self.tokenizer: "sd1_clip.SD1Tokenizer" = tokenizer(embedding_directory=embedding_directory, tokenizer_data=tokenizer_data)
        self.patcher = model_patcher.ModelPatcher(self.cond_stage_model, load_device=load_device, offload_device=offload_device)
@@ -116,7 +117,7 @@ class CLIP:
            model_management.load_models_gpu([self.patcher], force_full_load=True)
        self.layer_idx = None
        self.use_clip_schedule = False
-        logging.debug("CLIP model load device: {}, offload device: {}, current: {}".format(load_device, offload_device, params['device']))
+        logger.debug("CLIP model load device: {}, offload device: {}, current: {}".format(load_device, offload_device, params['device']))

    def clone(self):
        n = CLIP(no_init=True)
@@ -355,7 +356,7 @@ class VAE:
                self.upscale_ratio = (lambda a: max(0, a * 8 - 7), 32, 32)
                self.working_dtypes = [torch.bfloat16, torch.float32]
            else:
-                logging.warning("WARNING: No VAE weights detected, VAE not initalized.")
+                logger.warning("WARNING: No VAE weights detected, VAE not initalized.")
                self.first_stage_model = None
                return
        else:
@@ -364,10 +365,10 @@ class VAE:
        m, u = self.first_stage_model.load_state_dict(sd, strict=False)
        if len(m) > 0:
-            logging.warning("Missing VAE keys {}".format(m))
+            logger.warning("Missing VAE keys {}".format(m))

        if len(u) > 0:
-            logging.debug("Leftover VAE keys {}".format(u))
+            logger.debug("Leftover VAE keys {}".format(u))

        if device is None:
            device = model_management.vae_device()
@@ -380,7 +381,7 @@ class VAE:
        self.output_device = model_management.intermediate_device()

        self.patcher = model_patcher.ModelPatcher(self.first_stage_model, load_device=self.device, offload_device=offload_device)
-        logging.debug("VAE load device: {}, offload device: {}, dtype: {}".format(self.device, offload_device, self.vae_dtype))
+        logger.debug("VAE load device: {}, offload device: {}, dtype: {}".format(self.device, offload_device, self.vae_dtype))

    def vae_encode_crop_pixels(self, pixels):
        dims = pixels.shape[1:-1]
@@ -446,7 +447,7 @@ class VAE:
                    pixel_samples = torch.empty((samples_in.shape[0],) + tuple(out.shape[1:]), device=self.output_device)
                pixel_samples[x:x + batch_number] = out
        except model_management.OOM_EXCEPTION as e:
-            logging.warning("Warning: Ran out of memory when regular VAE decoding, retrying with tiled VAE decoding.")
+            logger.warning("Warning: Ran out of memory when regular VAE decoding, retrying with tiled VAE decoding.")
            dims = samples_in.ndim - 2
            if dims == 1:
                pixel_samples = self.decode_tiled_1d(samples_in)
@@ -503,7 +504,7 @@ class VAE:
                samples[x:x + batch_number] = out

        except model_management.OOM_EXCEPTION as e:
-            logging.warning("Warning: Ran out of memory when regular VAE encoding, retrying with tiled VAE encoding.")
+            logger.warning("Warning: Ran out of memory when regular VAE encoding, retrying with tiled VAE encoding.")
            if len(pixel_samples.shape) == 3:
                samples = self.encode_tiled_1d(pixel_samples)
            else:
@@ -695,10 +696,10 @@ def load_text_encoder_state_dicts(state_dicts=[], embedding_directory=None, clip
    for c in clip_data:
        m, u = clip.load_sd(c)
        if len(m) > 0:
-            logging.warning("clip missing: {}".format(m))
+            logger.warning("clip missing: {}".format(m))

        if len(u) > 0:
-            logging.debug("clip unexpected: {}".format(u))
+            logger.debug("clip unexpected: {}".format(u))

    return clip
@@ -711,7 +712,7 @@ def load_gligen(ckpt_path):

def load_checkpoint(config_path=None, ckpt_path=None, output_vae=True, output_clip=True, embedding_directory=None, state_dict=None, config=None):
-    logging.warning("Warning: The load checkpoint with config function is deprecated and will eventually be removed, please use the other one.")
+    logger.warning("Warning: The load checkpoint with config function is deprecated and will eventually be removed, please use the other one.")
    model, clip, vae, _ = load_checkpoint_guess_config(ckpt_path, output_vae=output_vae, output_clip=output_clip, output_clipvision=False, embedding_directory=embedding_directory, output_model=True)
    # TODO: this function is a mess and should be removed eventually
    if config is None:
@@ -809,18 +810,18 @@ def load_state_dict_guess_config(sd, output_vae=True, output_clip=True, output_c
        if len(m) > 0:
            m_filter = list(filter(lambda a: ".logit_scale" not in a and ".transformer.text_projection.weight" not in a, m))
            if len(m_filter) > 0:
-                logging.warning("clip missing: {}".format(m))
+                logger.warning("clip missing: {}".format(m))
            else:
-                logging.debug("clip missing: {}".format(m))
+                logger.debug("clip missing: {}".format(m))

        if len(u) > 0:
-            logging.debug("clip unexpected {}:".format(u))
+            logger.debug("clip unexpected {}:".format(u))
    else:
-        logging.warning("no CLIP/text encoder weights in checkpoint, the text encoder model will not be loaded.")
+        logger.warning(f"no CLIP/text encoder weights in checkpoint {ckpt_path}, the text encoder model will not be loaded.")

    left_over = sd.keys()
    if len(left_over) > 0:
-        logging.debug("left over keys: {}".format(left_over))
+        logger.debug("left over keys: {}".format(left_over))

    if output_model:
        _model_patcher = model_patcher.ModelPatcher(model, load_device=load_device, offload_device=model_management.unet_offload_device(), ckpt_name=os.path.basename(ckpt_path))
@@ -866,7 +867,7 @@ def load_diffusion_model_state_dict(sd, model_options: dict = None, ckpt_path: O
            if k in sd:
                new_sd[diffusers_keys[k]] = sd.pop(k)
            else:
-                logging.warning("{} {}".format(diffusers_keys[k], k))
+                logger.warning("{} {}".format(diffusers_keys[k], k))

    offload_device = model_management.unet_offload_device()
    unet_weight_dtype = list(model_config.supported_inference_dtypes)
@@ -889,7 +890,7 @@ def load_diffusion_model_state_dict(sd, model_options: dict = None, ckpt_path: O
    model.load_model_weights(new_sd, "")
    left_over = sd.keys()
    if len(left_over) > 0:
-        logging.info("left over keys in unet: {}".format(left_over))
+        logger.info("left over keys in unet: {}".format(left_over))
    return model_patcher.ModelPatcher(model, load_device=load_device, offload_device=offload_device, ckpt_name=os.path.basename(ckpt_path))
@@ -899,7 +900,7 @@ def load_diffusion_model(unet_path, model_options: dict = None):
    sd = utils.load_torch_file(unet_path)
    model = load_diffusion_model_state_dict(sd, model_options=model_options, ckpt_path=unet_path)
    if model is None:
-        logging.error("ERROR UNSUPPORTED UNET {}".format(unet_path))
+        logger.error("ERROR UNSUPPORTED UNET {}".format(unet_path))
        raise RuntimeError("ERROR: Could not detect model type of: {}".format(unet_path))
    return model
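All of the hunks above apply the same one-line convention; a minimal sketch of the pattern, with an illustrative function name:
```python
import logging

# One logger per module, named after the module, instead of calling the
# root-level logging.warning(...) helpers directly.
logger = logging.getLogger(__name__)


def load_weights(path: str) -> None:
    # Records now carry this module's name, so applications embedding the
    # library can filter or silence them per module.
    logger.warning("missing keys in %s", path)
```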