Commit Graph

1507 Commits

Author SHA1 Message Date
doctorpangloss
e6bddb4a9c Fix pylint errors 2024-10-29 14:27:14 -07:00
doctorpangloss
b42e59d602 Fix tests 2024-10-29 14:27:14 -07:00
doctorpangloss
4a13766d14 --base-paths argument adds additional paths to search for models/checkpoints, models/loras, and other model directories, including directories specified in this pattern by custom nodes 2024-10-29 14:27:14 -07:00
doctorpangloss
a83b561ea7 Follow symlinks for statics so that packages can correctly serve files when installed with uv. Update version. 2024-10-15 11:01:46 -07:00
doctorpangloss
5412451def Handle custom_nodes returning None responses more gracefully 2024-10-15 11:01:21 -07:00
doctorpangloss
995807b4be Improve custom node compatibility by including this stub symbol 2024-10-15 10:13:28 -07:00
doctorpangloss
40902acc28 Use the HuggingFace file for dreamshaper 2024-10-15 10:13:13 -07:00
Benjamin Berman
e5fc19a25b Improve vanilla node importing and fix CUDA on CPU devices bug 2024-10-15 00:02:06 -07:00
Benjamin Berman
9c9df424b4 Fix CUDA package with no drivers 2024-10-14 22:56:21 -07:00
doctorpangloss
b0d606a282 Improve installation instructions with non-deprecated messaging. 0.2.3 is now directly written as the server version. 2024-10-14 15:54:21 -07:00
doctorpangloss
8512f361fe Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2024-10-14 15:26:27 -07:00
comfyanonymous
3c60ecd7a8 Fix fp8 ops staying enabled. 2024-10-12 14:10:13 -04:00
comfyanonymous
7ae6626723 Remove useless argument. 2024-10-12 07:16:21 -04:00
comfyanonymous
6632365e16 model_options consistency between functions.
weight_dtype -> dtype
2024-10-11 20:51:19 -04:00
Kadir Nar
ad07796777 🐛 Add device to variable c (#5210) 2024-10-11 20:37:50 -04:00
doctorpangloss
c0d1c9f96d Improve OpenAPI spec 2024-10-11 14:46:26 -07:00
doctorpangloss
ed078c2f1f Update web content 2024-10-11 14:00:16 -07:00
doctorpangloss
b5df6c64fa Update OpenAPI spec to be more accurate 2024-10-11 13:59:57 -07:00
doctorpangloss
79b465faf2 Increase server response timeouts 2024-10-11 13:52:17 -07:00
doctorpangloss
caa6a37936 Fix pylint error 2024-10-11 13:51:13 -07:00
doctorpangloss
1cc637cb4f Fix SDXL clip issue, fix website header issue 2024-10-10 22:46:52 -07:00
doctorpangloss
f3da381869 Fix inference mode execution issues 2024-10-10 21:00:09 -07:00
doctorpangloss
a38968f098 Improvements to execution
- Validation errors that occur early in the lifecycle of prompt execution now get propagated to their callers in the EmbeddedComfyClient. This includes error messages about missing node classes.
- The execution context now includes the node_id and the prompt_id
- Latent previews are now sent with a node_id. This is not backwards compatible with old frontends.
- Dependency execution errors are now modeled correctly.
- Distributed progress encodes image previews with node and prompt IDs.
- Typing for models
- The frontend was updated to use node IDs with previews
- Improvements to torch.compile experiments
- Some controlnet_aux nodes were upstreamed
2024-10-10 19:30:18 -07:00
doctorpangloss
69e523b89d Experimental quantization support. Only Linux is meaningfully supported 2024-10-10 13:43:06 -07:00
comfyanonymous
1b80895285 Make clip loader nodes support loading sd3 t5xxl in lower precision.
Add attention mask support in the SD3 text encoder code.
2024-10-10 15:06:15 -04:00
doctorpangloss
5f26b76f59 Gracefully handle running with cuda torch on CPU only devices 2024-10-10 10:42:22 -07:00
Dr.Lt.Data
5f9d5a244b Hotfix for the div zero occurrence when memory_used_encode is 0 (#5121)
https://github.com/comfyanonymous/ComfyUI/issues/5069#issuecomment-2382656368
2024-10-09 23:34:34 -04:00
Jonathan Avila
4b2f0d9413 Increase maximum macOS version to 15.0.1 when forcing upcast attention (#5191) 2024-10-09 22:21:41 -04:00
comfyanonymous
e38c94228b Add a weight_dtype fp8_e4m3fn_fast to the Diffusion Model Loader node.
This is used to load weights in fp8 and use fp8 matrix multiplication.
2024-10-09 19:43:17 -04:00
doctorpangloss
c34403b574 Fix invalid device here 2024-10-09 11:21:19 -07:00
comfyanonymous
7ea7b2e77f Slightly improve the fast previews for flux by adding a bias. 2024-10-09 09:48:18 -07:00
comfyanonymous
9786ea4a17 Use torch.nn.functional.linear in RGB preview code.
Add an optional bias to the latent RGB preview code.
2024-10-09 09:48:17 -07:00
comfyanonymous
91f458061c Fix flux doras with diffusers keys. 2024-10-09 09:48:16 -07:00
City
7d1c420d19 Flux torch.compile fix (#5082) 2024-10-09 09:47:46 -07:00
doctorpangloss
99f0fa8b50 Enable sage attention autodetection 2024-10-09 09:27:05 -07:00
doctorpangloss
388dad67d5 Fix pylint errors in attention 2024-10-09 09:26:02 -07:00
doctorpangloss
bbe2ed330c Memory management and compilation improvements
- Experimental support for sage attention on Linux
- Diffusers loader now supports model indices
- Transformers model management now aligns with updates to ComfyUI
- Flux layers correctly use unbind
- Add float8 support for model loading in more places
- Experimental quantization approaches from Quanto and torchao
- Model upscaling interacts with memory management better

This update also disables ROCm testing because it isn't reliable enough
on consumer hardware. ROCm is not really supported by the 7600.
2024-10-09 09:13:47 -07:00
comfyanonymous
203942c8b2 Fix flux doras with diffusers keys. 2024-10-08 19:03:40 -04:00
comfyanonymous
8dfa0cc552 Make SD3 fast previews a little better. 2024-10-07 09:19:59 -04:00
comfyanonymous
e5ecdfdd2d Make fast previews for SDXL a little better by adding a bias. 2024-10-06 19:27:04 -04:00
comfyanonymous
7d29fbf74b Slightly improve the fast previews for flux by adding a bias. 2024-10-06 17:55:46 -04:00
comfyanonymous
7d2467e830 Some minor cleanups. 2024-10-05 13:22:39 -04:00
Benjamin Berman
0a25b67ff8 Fix pylint errors 2024-10-04 21:12:37 -07:00
Benjamin Berman
afbb8aa154 Fix #23 2024-10-04 21:10:19 -07:00
doctorpangloss
de45dd50c5 Improve vanilla node importing for execution nodes 2024-10-04 10:56:43 -07:00
comfyanonymous
6f021d8aa0 Let --verbose have an argument for the log level. 2024-10-04 10:05:34 -04:00
comfyanonymous
d854ed0bcf Allow using SD3 type te output on flux model. 2024-10-03 09:44:54 -04:00
comfyanonymous
abcd006b8c Allow more permutations of clip/t5 in dual clip loader. 2024-10-03 09:26:11 -04:00
comfyanonymous
d985d1d7dc CLIP Loader node now supports clip_l and clip_g only for SD3. 2024-10-02 04:25:17 -04:00
comfyanonymous
d1cdf51e1b Refactor some of the TE detection code. 2024-10-01 07:08:41 -04:00