Commit Graph

4341 Commits

Author SHA1 Message Date
doctorpangloss
5f26b76f59 Gracefully handle running with cuda torch on CPU only devices 2024-10-10 10:42:22 -07:00
Dr.Lt.Data
5f9d5a244b
Hotfix for the division by zero that occurs when memory_used_encode is 0 (#5121)
https://github.com/comfyanonymous/ComfyUI/issues/5069#issuecomment-2382656368
2024-10-09 23:34:34 -04:00
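The hotfix above guards a division by memory_used_encode. A minimal sketch of that kind of guard, with hypothetical names (the real fix lives in ComfyUI's VAE memory estimation, not in this helper):

```python
def encode_ratio(memory_used_encode: float, memory_free: float) -> float:
    """Return how many times the free memory covers the encode cost.

    Guards against memory_used_encode == 0, which would otherwise raise
    ZeroDivisionError (hypothetical reconstruction of the reported bug).
    """
    if memory_used_encode <= 0:
        # Fall back to a safe sentinel instead of dividing by zero.
        return float("inf")
    return memory_free / memory_used_encode
```

Returning infinity makes the degenerate case read as "memory is not a constraint", which is one reasonable way to keep downstream comparisons working.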
Chenlei Hu
14eba07acd
Update web content to release v1.3.11 (#5189)
* Update web content to release v1.3.11

* nit
2024-10-09 22:37:04 -04:00
Jonathan Avila
4b2f0d9413
Increase maximum macOS version to 15.0.1 when forcing upcast attention (#5191) 2024-10-09 22:21:41 -04:00
Yoland Yan
25eac1d780
Change runner label for the new runners (#5197) 2024-10-09 20:08:57 -04:00
comfyanonymous
e38c94228b Add a weight_dtype fp8_e4m3fn_fast to the Diffusion Model Loader node.
This is used to load weights in fp8 and use fp8 matrix multiplication.
2024-10-09 19:43:17 -04:00
doctorpangloss
c34403b574 Fix invalid device here 2024-10-09 11:21:19 -07:00
comfyanonymous
7ea7b2e77f Slightly improve the fast previews for flux by adding a bias. 2024-10-09 09:48:18 -07:00
comfyanonymous
9786ea4a17 Use torch.nn.functional.linear in RGB preview code.
Add an optional bias to the latent RGB preview code.
2024-10-09 09:48:17 -07:00
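The RGB preview change amounts to a per-pixel linear projection from latent channels to RGB plus an optional bias. A sketch of the idea using NumPy in place of torch.nn.functional.linear (shapes and names are assumptions, not the actual ComfyUI code):

```python
import numpy as np

def latent_to_rgb_preview(latent: np.ndarray,
                          factors: np.ndarray,
                          bias: np.ndarray) -> np.ndarray:
    """Map a [C, H, W] latent to a [H, W, 3] RGB preview.

    Equivalent in spirit to torch.nn.functional.linear(x, weight, bias):
    each pixel's C latent channels are projected to 3 RGB channels by a
    [3, C] factor matrix, then a per-channel bias is added.
    """
    c, h, w = latent.shape
    flat = latent.reshape(c, h * w).T      # [H*W, C]
    rgb = flat @ factors.T + bias          # [H*W, 3]
    return rgb.reshape(h, w, 3)
```

The bias shifts each output channel uniformly, which is how the "adding a bias" commits above improve the color balance of the fast previews.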
comfyanonymous
91f458061c Fix flux doras with diffusers keys. 2024-10-09 09:48:16 -07:00
City
7d1c420d19 Flux torch.compile fix (#5082) 2024-10-09 09:47:46 -07:00
doctorpangloss
c61c3fffad Fix pylint import errors 2024-10-09 09:43:50 -07:00
doctorpangloss
99f0fa8b50 Enable sage attention autodetection 2024-10-09 09:27:05 -07:00
doctorpangloss
388dad67d5 Fix pylint errors in attention 2024-10-09 09:26:02 -07:00
doctorpangloss
bbe2ed330c Memory management and compilation improvements
- Experimental support for sage attention on Linux
- Diffusers loader now supports model indices
- Transformers model management now aligns with updates to ComfyUI
- Flux layers correctly use unbind
- Add float8 support for model loading in more places
- Experimental quantization approaches from Quanto and torchao
- Model upscaling interacts with memory management better

This update also disables ROCm testing because it isn't reliable enough
on consumer hardware; ROCm is not really supported on the RX 7600.
2024-10-09 09:13:47 -07:00
comfyanonymous
203942c8b2 Fix flux doras with diffusers keys. 2024-10-08 19:03:40 -04:00
Brendan Hoar
3c72c89a52
Update folder_paths.py - try/catch for special file_name values (#5187)
Somehow a file called "nul" ended up in a Windows checkpoints subdirectory. This caused all sorts of havoc for the many nodes that need the list of checkpoints.
2024-10-08 15:04:32 -04:00
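The try/catch the commit describes can be sketched as follows; safe_is_file is a hypothetical helper, not the actual folder_paths.py code:

```python
import os

def safe_is_file(path: str) -> bool:
    """Check whether path is a regular file, tolerating names like 'nul'.

    On Windows, reserved device names ('nul', 'con', 'prn', ...) can make
    filesystem calls raise OSError; wrapping the check in try/except keeps
    a directory scan (e.g. building the checkpoint list) from blowing up
    on one bad entry.
    """
    try:
        return os.path.isfile(path)
    except (OSError, ValueError):
        return False
```

Swallowing the error per-file means one pathological filename degrades to "not a checkpoint" instead of breaking every node that lists checkpoints.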
Chenlei Hu
614377abd6
Update web content to release v1.2.64 (#5124) 2024-10-07 17:15:29 -04:00
comfyanonymous
8dfa0cc552 Make SD3 fast previews a little better. 2024-10-07 09:19:59 -04:00
comfyanonymous
e5ecdfdd2d Make fast previews for SDXL a little better by adding a bias. 2024-10-06 19:27:04 -04:00
comfyanonymous
7d29fbf74b Slightly improve the fast previews for flux by adding a bias. 2024-10-06 17:55:46 -04:00
Lex
2c641e64ad
IS_CHANGED should be a classmethod (#5159) 2024-10-06 05:47:51 -04:00
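The fix in #5159 amounts to declaring IS_CHANGED with @classmethod so it can be called on the node class itself rather than on an instance; a minimal sketch with hypothetical node details:

```python
class ExampleNode:
    # ComfyUI calls IS_CHANGED on the node class to decide whether a
    # node's cached output is stale, so it must be a classmethod rather
    # than an instance method (the class is what the registry holds).
    @classmethod
    def IS_CHANGED(cls, **kwargs):
        # Returning a stable value for identical inputs enables caching;
        # a changing value would force re-execution (sketch only).
        return hash(tuple(sorted(kwargs.items())))
```

Called as ExampleNode.IS_CHANGED(seed=1), no instance is needed, which is exactly what the decorator guarantees.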
comfyanonymous
7d2467e830 Some minor cleanups. 2024-10-05 13:22:39 -04:00
Benjamin Berman
0a25b67ff8 Fix pylint errors 2024-10-04 21:12:37 -07:00
Benjamin Berman
afbb8aa154 Fix #23 2024-10-04 21:10:19 -07:00
doctorpangloss
de45dd50c5 Improve vanilla node importing for execution nodes 2024-10-04 10:56:43 -07:00
comfyanonymous
6f021d8aa0 Let --verbose have an argument for the log level. 2024-10-04 10:05:34 -04:00
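One common argparse pattern for a flag that optionally takes a value, consistent with the commit message above (the specific defaults and choices here are assumptions, not ComfyUI's exact definition):

```python
import argparse

parser = argparse.ArgumentParser()
# Bare `--verbose` selects DEBUG (the old all-or-nothing behavior);
# `--verbose WARNING` etc. selects a specific log level; omitting the
# flag entirely keeps the normal INFO level.
parser.add_argument(
    "--verbose",
    nargs="?",            # the value after the flag is optional
    const="DEBUG",        # used when the flag appears with no value
    default="INFO",       # used when the flag is absent
    choices=["CRITICAL", "ERROR", "WARNING", "INFO", "DEBUG"],
)
```

The nargs="?"/const/default trio is what lets a single flag cover all three spellings without breaking existing invocations.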
comfyanonymous
d854ed0bcf Allow using SD3 type text encoder (te) output on the flux model. 2024-10-03 09:44:54 -04:00
comfyanonymous
abcd006b8c Allow more permutations of clip/t5 in dual clip loader. 2024-10-03 09:26:11 -04:00
comfyanonymous
d985d1d7dc CLIP Loader node now supports clip_l and clip_g only for SD3. 2024-10-02 04:25:17 -04:00
comfyanonymous
d1cdf51e1b Refactor some of the TE detection code. 2024-10-01 07:08:41 -04:00
doctorpangloss
144fe6c421 Fix aiohttp bugs 2024-09-30 13:12:53 -07:00
comfyanonymous
b4626ab93e Add simpletuner lycoris format for SD unet. 2024-09-30 06:03:27 -04:00
comfyanonymous
a9e459c2a4 Use torch.nn.functional.linear in RGB preview code.
Add an optional bias to the latent RGB preview code.
2024-09-29 11:27:49 -04:00
comfyanonymous
3bb4dec720 Fix issue with loras, lowvram and --fast fp8. 2024-09-28 14:42:32 -04:00
City
8733191563
Flux torch.compile fix (#5082) 2024-09-27 22:07:51 -04:00
doctorpangloss
a91baf7175 Disable flaky tests 2024-09-27 12:46:35 -07:00
doctorpangloss
6ef2d534b6 Fix polling for history too quickly; this will need an alternative approach so that readiness is immediate. 2024-09-27 12:46:28 -07:00
doctorpangloss
d25394d386 API now supports fire-and-forget requests and checking on queue status; prefetch_count is now expressly set to 1 for workers 2024-09-27 12:07:54 -07:00
doctorpangloss
a664a1fbc9 Add Flux inpainting model 2024-09-27 12:06:58 -07:00
doctorpangloss
53056ca76f Fix the upscale model throwing exceptions when it interacts with B&W upscaling on malformed images 2024-09-27 12:06:41 -07:00
comfyanonymous
83b01f960a Add backend option to TorchCompileModel.
If you want to use the cudagraphs backend you need to launch with --disable-cuda-malloc.

If you get other backends working feel free to make a PR to add them.
2024-09-27 02:12:37 -04:00
doctorpangloss
f642c7cc26 Support more upscale models 2024-09-26 18:08:43 -07:00
doctorpangloss
667b77149e Improve scaling and fit for diffusion 2024-09-26 18:08:34 -07:00
doctorpangloss
dbc8ee92a5 Add method to make this congruent with aio client 2024-09-26 18:08:15 -07:00
doctorpangloss
ab1a1de7a4 Fix missing arg to add_model_folder_path 2024-09-26 13:26:52 -07:00
comfyanonymous
d72e871cfa Add a note that the experimental model downloader api will be removed. 2024-09-26 03:17:52 -04:00
doctorpangloss
d215707662 Image operation nodes 2024-09-25 22:13:07 -07:00
doctorpangloss
a78f20178d Fix linking error 2024-09-25 10:16:56 -07:00
comfyanonymous
037c3159b6 Move some nodes out of _for_testing. 2024-09-25 08:41:22 -04:00