Commit Graph

392 Commits

Benjamin Berman
95b630224d fix issues with finding the path to the image file, no matter how the application was started 2023-08-11 15:26:13 -07:00
Benjamin Berman
10b8be1c1f fix erroneous sys.version 2023-08-07 17:04:02 -07:00
Benjamin Berman
3216d1c6b0 fix image upload, running from main 2023-08-04 15:44:02 -07:00
Benjamin Berman
dc4289dbb9 working install from git repo 2023-08-04 15:44:02 -07:00
Benjamin Berman
b12394c9e9 include legacy main.py 2023-08-04 15:44:02 -07:00
Benjamin Berman
3d3c5ae344 wip 2023-08-04 15:44:02 -07:00
Benjamin Berman
82d0edf121 add missing type 2023-08-04 15:44:02 -07:00
Benjamin Berman
b3038de648 make the nodes organization more sane 2023-08-04 15:44:02 -07:00
Benjamin Berman
87cf8f613e mara nodes 2023-08-04 15:44:02 -07:00
Benjamin Berman
66b857d069 basic caching 2023-08-04 15:44:02 -07:00
Benjamin Berman
7c197409be macOS latest with fp16 enabled by default 2023-08-04 15:44:02 -07:00
Benjamin Berman
550788eaca Updates for better compatibility with DeepFloyd 2023-08-04 15:44:02 -07:00
Benjamin Berman
65722c2bb3 openapi definition, documented API endpoints and new ergonomic API endpoint 2023-08-04 15:44:02 -07:00
Benjamin Berman
18d2b23495 Create a setup.py and automatically select the correct pytorch binaries for the current platform and supported devices
- setup.py now works
- Makes installation work in a variety of ways, including making other packages dependent on this one for e.g. plugins (see the sketch after this entry)
- Fixes missing __init__.py issues
- Fixes imports
- Compatible with your existing scripts that rely on requirements.txt
- Fixes error in comfy/ldm/models/diffusion/ddim.py
- Fixes missing packages for other diffusers code in this repo
2023-08-04 15:44:02 -07:00
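
The point above about dependent packages can be illustrated with a minimal sketch of a downstream plugin's setup.py; the distribution name, version, and repository URL below are assumptions for illustration, not values taken from this repository:

```python
# Hedged sketch of a plugin package depending on this repo via setup.py.
# "comfyui" and the git URL are illustrative assumptions.
from setuptools import setup

setup(
    name="example-comfy-plugin",
    version="0.1.0",
    install_requires=[
        # PEP 508 direct reference: depend on the package straight from git
        "comfyui @ git+https://github.com/example/ComfyUI.git",
    ],
)
```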
comfyanonymous
1ce0d8ad68 Add CMP 30HX card to the nvidia_16_series list. 2023-08-04 12:08:45 -04:00
comfyanonymous
c99d8002f8 Make sure the pooled output stays at the EOS token with added embeddings. 2023-08-03 20:27:50 -04:00
comfyanonymous
4a77fcd6ab Only shift text encoder to vram when CPU cores are under 8. 2023-07-31 00:08:54 -04:00
comfyanonymous
3cd31d0e24 Lower CPU thread check for running the text encoder on the CPU vs GPU. 2023-07-30 17:18:24 -04:00
comfyanonymous
2b13939044 Remove some useless code. 2023-07-30 14:13:33 -04:00
comfyanonymous
95d796fc85 Faster VAE loading. 2023-07-29 16:28:30 -04:00
comfyanonymous
4b957a0010 Initialize the unet directly on the target device. 2023-07-29 14:51:56 -04:00
comfyanonymous
c910b4a01c Remove unused code and torchdiffeq dependency. 2023-07-28 21:32:27 -04:00
comfyanonymous
1141029a4a Add --disable-metadata argument to disable saving metadata in files. 2023-07-28 12:31:41 -04:00
comfyanonymous
fbf5c51c1c Merge branch 'fix_batch_timesteps' of https://github.com/asagi4/ComfyUI 2023-07-27 16:13:48 -04:00
comfyanonymous
68be24eead Remove some prints. 2023-07-27 16:12:43 -04:00
asagi4
1ea4d84691 Fix timestep ranges when batch_size > 1 2023-07-27 21:14:09 +03:00
comfyanonymous
5379051d16 Fix diffusers VAE loading. 2023-07-26 18:26:39 -04:00
comfyanonymous
727588d076 Fix some new loras. 2023-07-25 16:39:15 -04:00
comfyanonymous
4f9b6f39d1 Fix potential issue with Save Checkpoint. 2023-07-25 00:45:20 -04:00
comfyanonymous
5f75d784a1 Start is now 0.0 and end is now 1.0 for the timestep ranges. 2023-07-24 18:38:17 -04:00
comfyanonymous
7ff14b62f8 ControlNetApplyAdvanced can now define when controlnet gets applied. 2023-07-24 17:50:49 -04:00
comfyanonymous
d191c4f9ed Add a ControlNetApplyAdvanced node.
The controlnet can be applied to only the positive or only the negative
prompt by connecting the node's inputs accordingly.
2023-07-24 13:35:20 -04:00
comfyanonymous
0240946ecf Add a way to set which range of timesteps the cond gets applied to. 2023-07-24 09:25:02 -04:00
comfyanonymous
22f29d66ca Try to fix memory issue with lora. 2023-07-22 21:38:56 -04:00
comfyanonymous
67be7eb81d Nodes can now patch the unet function. 2023-07-22 17:01:12 -04:00
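
A minimal sketch of what patching the unet function through this hook might look like, as best understood from this commit; the FakePatcher stub, function names, and params keys are illustrative assumptions so the sketch runs standalone outside ComfyUI:

```python
# Hedged sketch of the unet-function wrapper hook added in this commit.
# FakePatcher stands in for ComfyUI's ModelPatcher; all names here are
# assumptions for illustration, not the confirmed upstream API.
import copy

class FakePatcher:
    def __init__(self):
        self.model_options = {}

    def clone(self):
        return copy.deepcopy(self)

    def set_model_unet_function_wrapper(self, wrapper):
        self.model_options["model_function_wrapper"] = wrapper

def passthrough_wrapper(apply_model, params):
    # apply_model is the original unet call; params carries the noisy
    # latents, timestep, and conditioning. A real patch would modify them.
    return apply_model(params["input"], params["timestep"], **params["c"])

def patch(model):
    m = model.clone()  # patch a copy so the original model stays untouched
    m.set_model_unet_function_wrapper(passthrough_wrapper)
    return m

patched = patch(FakePatcher())
```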
comfyanonymous
12a6e93171 Del the right object when applying lora. 2023-07-22 11:25:49 -04:00
comfyanonymous
78e7958d17 Support controlnet in diffusers format. 2023-07-21 22:58:16 -04:00
comfyanonymous
09386a3697 Fix issue with lora in some cases when combined with model merging. 2023-07-21 21:27:27 -04:00
comfyanonymous
58b2364f58 Properly support SDXL diffusers unet with UNETLoader node. 2023-07-21 14:38:56 -04:00
comfyanonymous
0115018695 Print errors and continue when lora weights are not compatible. 2023-07-20 19:56:22 -04:00
comfyanonymous
4760c29380 Merge branch 'fix-AttributeError-module-'torch'-has-no-attribute-'mps'' of https://github.com/KarryCharon/ComfyUI 2023-07-20 00:34:54 -04:00
comfyanonymous
0b284f650b Fix typo. 2023-07-19 10:20:32 -04:00
comfyanonymous
e032ca6138 Fix ddim issue with older torch versions. 2023-07-19 10:16:00 -04:00
comfyanonymous
18885f803a Add MX450 and MX550 to list of cards with broken fp16. 2023-07-19 03:08:30 -04:00
comfyanonymous
9ba440995a It's actually possible to torch.compile the unet now. 2023-07-18 21:36:35 -04:00
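
For illustration, torch.compile (torch >= 2.0) wraps any nn.Module; the toy module below is a stand-in for the unet, not ComfyUI's actual model:

```python
import torch
import torch.nn as nn

# Minimal stand-in for a unet; the real diffusion unet is far larger.
class TinyUnet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(4, 4, 3, padding=1)

    def forward(self, x, t):
        # A real unet also consumes timestep embeddings and conditioning.
        return self.conv(x) + t.view(-1, 1, 1, 1)

unet = TinyUnet()
if hasattr(torch, "compile"):  # available on torch 2.0 and up
    unet = torch.compile(unet)

out = unet(torch.randn(1, 4, 8, 8), torch.tensor([0.5]))
```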
comfyanonymous
51d5477579 Add key to indicate checkpoint is v_prediction when saving. 2023-07-18 00:25:53 -04:00
comfyanonymous
ff6b047a74 Fix device print on old torch version. 2023-07-17 15:18:58 -04:00
comfyanonymous
9871a15cf9 Enable --cuda-malloc by default on torch 2.0 and up.
Add --disable-cuda-malloc to disable it.
2023-07-17 15:12:10 -04:00
comfyanonymous
55d0fca9fa --windows-standalone-build now enables --cuda-malloc 2023-07-17 14:10:36 -04:00
comfyanonymous
1679abd86d Add a command line argument to enable backend:cudaMallocAsync 2023-07-17 11:00:14 -04:00
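
backend:cudaMallocAsync refers to PyTorch's asynchronous CUDA allocator backend. Independent of ComfyUI's flag, it can be selected through the PYTORCH_CUDA_ALLOC_CONF environment variable, which must be set before CUDA is initialized; a minimal standalone sketch:

```python
import os

# Must be set before torch initializes CUDA for the backend to take effect.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "backend:cudaMallocAsync"

import torch  # imported after the env var so the allocator picks it up

if torch.cuda.is_available():
    x = torch.randn(1024, 1024, device="cuda")  # served by cudaMallocAsync
    print(torch.cuda.memory_allocated())
```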