comfyanonymous
ab885b33ba
Skip layer guidance node now works on LTX-Video.
2024-11-23 10:33:05 -05:00
doctorpangloss
b1ad9cad37
Known Flux controlnet models
2024-11-22 18:00:29 -08:00
comfyanonymous
839ed3368e
Some improvements to the lowvram unloading.
2024-11-22 20:59:15 -05:00
doctorpangloss
4b77c4941c
LTXV tests
2024-11-22 17:13:19 -08:00
doctorpangloss
fe64070b41
Fix bad merge of terminal_service
2024-11-22 15:55:51 -08:00
doctorpangloss
f39b8dfebc
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-11-22 15:50:23 -08:00
comfyanonymous
6e8cdcd3cb
Fix some tiled VAE decoding issues with LTX-Video.
2024-11-22 18:00:34 -05:00
comfyanonymous
e5c3f4b87f
LTXV lowvram fixes.
2024-11-22 17:17:11 -05:00
comfyanonymous
bc6be6c11e
Some fixes to the lowvram system.
2024-11-22 16:40:04 -05:00
comfyanonymous
5818f6cf51
Remove print.
2024-11-22 10:49:15 -05:00
comfyanonymous
5e16f1d24b
Support Lightricks LTX-Video model.
2024-11-22 08:46:39 -05:00
comfyanonymous
2fd9c1308a
Fix mask issue in some attention functions.
2024-11-22 02:10:09 -05:00
comfyanonymous
8f0009aad0
Support new flux model variants.
2024-11-21 08:38:23 -05:00
comfyanonymous
41444b5236
Add some new weight patching functionality.
...
Add a way to reshape lora weights.
Allow weight patches on all weights, not just .weight and .bias.
Add a way for a lora to set a weight to a specific value.
2024-11-21 07:19:17 -05:00
comfyanonymous
07f6eeaa13
Fix mask issue with attention_xformers.
2024-11-20 17:07:46 -05:00
comfyanonymous
22535d0589
Skip layer guidance now works on stable audio model.
2024-11-20 07:33:06 -05:00
doctorpangloss
9d20de6462
Merge branch 'improved_memory' of github.com:comfyanonymous/ComfyUI
2024-11-19 11:06:27 -08:00
comfyanonymous
b699a15062
Refactor inpaint/ip2p code.
2024-11-19 03:25:25 -05:00
doctorpangloss
8ba412897e
Mochi and SageAttention improvements
2024-11-18 15:40:15 -08:00
doctorpangloss
264d84db39
Fix Pylint warnings
2024-11-18 14:10:58 -08:00
doctorpangloss
fb7a3f9386
Update ComfyUI
...
- use their logger when running interactively
- move the extra nodes files to where this fork expects them
- add the mochi checkpoints to known models
- add a mochi workflow test
2024-11-18 13:58:24 -08:00
doctorpangloss
c0f072ee0f
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-11-18 13:12:31 -08:00
comfyanonymous
d9f90965c8
Support block replace patches in auraflow.
2024-11-17 08:19:59 -05:00
comfyanonymous
41886af138
Add transformer options blocks replace patch to mochi.
2024-11-16 20:48:14 -05:00
doctorpangloss
4150dbbbe5
Tweaks to distributed queueing
...
- Do not auto delete the queue
- Make the queue durable
- Progress notifications expire
- Deprecation fix
2024-11-14 15:08:59 -08:00
comfyanonymous
3b9a6cf2b1
Fix issue with 3d masks.
2024-11-13 07:18:30 -05:00
comfyanonymous
8ebf2d8831
Add block replace transformer_options to flux.
2024-11-12 08:00:39 -05:00
doctorpangloss
44be2591df
Fix broken create-directories command
2024-11-11 16:21:16 -08:00
doctorpangloss
228794d835
Fix missing folder paths; fix #26, the protobuf compatibility issue that manifests in 1.28
2024-11-11 13:35:57 -08:00
comfyanonymous
eb476e6ea9
Allow 1D masks for 1D latents.
2024-11-11 14:44:52 -05:00
comfyanonymous
8b275ce5be
Support auto detecting some zsnr anime checkpoints.
2024-11-11 05:34:11 -05:00
comfyanonymous
2a18e98ccf
Refactor so that zsnr can be set in the sampling_settings.
2024-11-11 04:55:56 -05:00
comfyanonymous
bdeb1c171c
Fast previews for mochi.
2024-11-10 03:39:35 -05:00
comfyanonymous
8b90e50979
Properly handle and reshape masks when used on 3d latents.
2024-11-09 15:30:19 -05:00
comfyanonymous
2865f913f7
Free memory before doing tiled decode.
2024-11-07 04:01:24 -05:00
comfyanonymous
b49616f951
Make VAEDecodeTiled node work with video VAEs.
2024-11-07 03:47:12 -05:00
comfyanonymous
5e29e7a488
Remove scaled_fp8 key after reading it to silence warning.
2024-11-06 04:56:42 -05:00
comfyanonymous
8afb97cd3f
Fix unknown VAE being detected as the mochi VAE.
2024-11-05 03:43:27 -05:00
contentis
69694f40b3
fix dynamic shape export ( #5490 )
2024-11-04 14:59:28 -05:00
doctorpangloss
772e768fe8
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-11-04 10:17:26 -08:00
doctorpangloss
cde95eb71d
Improve logging and typing information for LoRA patches in ComfyUI
2024-11-04 09:38:13 -08:00
comfyanonymous
95972bab86
Fix issue.
2024-11-04 05:07:07 -05:00
comfyanonymous
6c9dbde7de
Fix mochi all in one checkpoint t5xxl key names.
2024-11-03 01:40:42 -05:00
comfyanonymous
fabf449feb
Mochi VAE encoder.
2024-11-01 17:33:09 -04:00
doctorpangloss
021d0d4f57
Fix #25: custom nodes that set input paths at import time now correctly see a models directory (or similar) that respects the configuration intended by the user
2024-11-01 13:40:03 -07:00
doctorpangloss
31eacb6ac9
Improve compilation of models, adding support for triton
2024-11-01 10:40:58 -07:00
comfyanonymous
bd5d8f150f
Prevent and detect some types of memory leaks.
2024-11-01 06:55:42 -04:00
comfyanonymous
975927cc79
Remove useless function.
2024-11-01 04:40:33 -04:00
comfyanonymous
1735d4fb01
Fix issue.
2024-11-01 04:25:27 -04:00
comfyanonymous
d8bd2a9baa
Less fragile memory management.
2024-11-01 02:41:51 -04:00
Aarni Koskela
1c8286a44b
Avoid SyntaxWarning in UniPC docstring ( #5442 )
2024-10-31 15:17:26 -04:00
comfyanonymous
1af4a47fd1
Bump up mac version for attention upcast bug workaround.
2024-10-31 15:15:31 -04:00
comfyanonymous
daa1565b93
Fix diffusers flux controlnet regression.
2024-10-30 13:11:34 -04:00
comfyanonymous
09fdb2b269
Support SD3.5 medium diffusers format weights and loras.
2024-10-30 04:24:00 -04:00
doctorpangloss
a5467b897d
Fix pylint error / 3.10 missing add_node
2024-10-29 19:37:06 -07:00
doctorpangloss
45299987f3
Mochi variable now correctly referenced
2024-10-29 19:24:17 -07:00
doctorpangloss
b3ceeebf94
Fix bugs in folder paths
...
- Adding the output paths now correctly registers a relative path, i.e., outputs/loras and models/loras will now be searched on all your base paths
- Adding absolute paths with models/ works better
- All the base paths and directories are queried better
2024-10-29 19:22:51 -07:00
doctorpangloss
a8d8bff548
Improve support for torch compilation and sage attention
2024-10-29 19:22:26 -07:00
doctorpangloss
76a80a65ea
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-10-29 15:35:39 -07:00
doctorpangloss
e6bddb4a9c
Fix pylint errors
2024-10-29 14:27:14 -07:00
doctorpangloss
b42e59d602
Fix tests
2024-10-29 14:27:14 -07:00
doctorpangloss
4a13766d14
--base-paths argument adds additional base paths that are searched for models/checkpoints, models/loras, and similar directories, including directories registered in this pattern by custom nodes
2024-10-29 14:27:14 -07:00
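The --base-paths commit above concerns CLI configuration; for orientation, upstream ComfyUI also exposes a programmatic way to register an extra search directory for a model category. Below is a minimal sketch, assuming folder_paths is importable and that add_model_folder_path(folder_name, full_folder_path) accepts this two-argument form (the fork's exact signature may differ; the /mnt/shared path is a placeholder).

```python
# Minimal sketch (not this fork's --base-paths implementation): register an extra
# directory for the "checkpoints" category via the folder_paths module.
# Assumption: add_model_folder_path(folder_name, full_folder_path) exists with this
# two-argument form; the path below is a placeholder.
import folder_paths

folder_paths.add_model_folder_path("checkpoints", "/mnt/shared/models/checkpoints")

# The registered locations and their contents can then be listed as usual.
print(folder_paths.get_folder_paths("checkpoints"))
print(folder_paths.get_filename_list("checkpoints"))
```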
comfyanonymous
30c0c81351
Add a way to patch blocks in SD3.
2024-10-29 00:48:32 -04:00
comfyanonymous
13b0ff8a6f
Update SD3 code.
2024-10-28 21:58:52 -04:00
comfyanonymous
c320801187
Remove useless line.
2024-10-28 17:41:12 -04:00
comfyanonymous
669d9e4c67
Set default shift on mochi to 6.0
2024-10-27 22:21:04 -04:00
comfyanonymous
9ee0a6553a
float16 inference is a bit broken on mochi.
2024-10-27 04:56:40 -04:00
comfyanonymous
5cbb01bc2f
Basic Genmo Mochi video model support.
...
To use:
"Load CLIP" node with t5xxl + type mochi
"Load Diffusion Model" node with the mochi dit file.
"Load VAE" with the mochi vae file.
EmptyMochiLatentVideo node for the latent.
euler + linear_quadratic in the KSampler node.
2024-10-26 06:54:00 -04:00
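For orientation, the workflow described in the commit above maps onto the vanilla node classes roughly as sketched below. This is a hedged illustration, not documented API: the node names come from the commit message and upstream ComfyUI, while the import paths, method signatures, resolution, and file names are assumptions or placeholders that may differ in this fork.

```python
# Hedged sketch of the Mochi workflow from the commit message, calling upstream
# ComfyUI node classes directly from Python. Import paths, signatures, and file
# names are assumptions/placeholders and may differ between versions.
from nodes import CLIPLoader, CLIPTextEncode, UNETLoader, VAELoader, KSampler, VAEDecode
from comfy_extras.nodes_mochi import EmptyMochiLatentVideo

clip = CLIPLoader().load_clip("t5xxl_fp16.safetensors", type="mochi")[0]   # "Load CLIP": t5xxl + type mochi
model = UNETLoader().load_unet("mochi_dit.safetensors", "default")[0]      # "Load Diffusion Model": mochi dit file
vae = VAELoader().load_vae("mochi_vae.safetensors")[0]                     # "Load VAE": mochi vae file

positive = CLIPTextEncode().encode(clip, "a calm ocean at sunset")[0]
negative = CLIPTextEncode().encode(clip, "")[0]
latent = EmptyMochiLatentVideo().generate(width=848, height=480, length=25)[0]

# euler + linear_quadratic in the KSampler node.
samples = KSampler().sample(model, seed=0, steps=30, cfg=4.5,
                            sampler_name="euler", scheduler="linear_quadratic",
                            positive=positive, negative=negative,
                            latent_image=latent)[0]
frames = VAEDecode().decode(vae, samples)[0]
```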
comfyanonymous
c3ffbae067
Make LatentUpscale nodes work on 3d latents.
2024-10-26 01:50:51 -04:00
comfyanonymous
d605677b33
Make euler_ancestral work on flow models (credit: Ashen).
2024-10-25 19:53:44 -04:00
PsychoLogicAu
af8cf79a2d
support SimpleTuner lycoris lora for SD3 ( #5340 )
2024-10-24 01:18:32 -04:00
comfyanonymous
66b0961a46
Fix ControlLora issue with last commit.
2024-10-23 17:02:40 -04:00
comfyanonymous
754597c8a9
Clean up some controlnet code.
...
Remove self.device which was useless.
2024-10-23 14:19:05 -04:00
comfyanonymous
915fdb5745
Fix lowvram edge case.
2024-10-22 16:34:50 -04:00
contentis
5a8a48931a
remove attention abstraction ( #5324 )
2024-10-22 14:02:38 -04:00
comfyanonymous
8ce2a1052c
Optimizations to --fast and scaled fp8.
2024-10-22 02:12:28 -04:00
comfyanonymous
f82314fcfc
Fix duplicate sigmas on beta scheduler.
2024-10-21 20:19:45 -04:00
comfyanonymous
0075c6d096
Mixed precision diffusion models with scaled fp8.
...
This change adds support for diffusion models where all the linear layers are scaled fp8 while the other weights are in the original precision.
2024-10-21 18:12:51 -04:00
comfyanonymous
83ca891118
Support scaled fp8 t5xxl model.
2024-10-20 22:27:00 -04:00
comfyanonymous
f9f9faface
Fixed model merging issue with scaled fp8.
2024-10-20 06:24:31 -04:00
comfyanonymous
471cd3eace
fp8 casting is fast on GPUs that support fp8 compute.
2024-10-20 00:54:47 -04:00
comfyanonymous
a68bbafddb
Support diffusion models with scaled fp8 weights.
2024-10-19 23:47:42 -04:00
comfyanonymous
73e3a9e676
Clamp output when rounding weight to prevent Nan.
2024-10-19 19:07:10 -04:00
comfyanonymous
67158994a4
Use the lowvram cast_to function for everything.
2024-10-17 17:25:56 -04:00
comfyanonymous
0bedfb26af
Revert "Fix Transformers FutureWarning ( #5140 )"
...
This reverts commit 95b7cf9bbe.
2024-10-16 12:36:19 -04:00
doctorpangloss
a83b561ea7
Follow symlinks for static files so that packages can correctly serve files when installed with uv. Update version.
2024-10-15 11:01:46 -07:00
doctorpangloss
5412451def
Handle custom_nodes returning None responses more gracefully
2024-10-15 11:01:21 -07:00
doctorpangloss
995807b4be
Improve custom node compatibility by including this stub symbol
2024-10-15 10:13:28 -07:00
doctorpangloss
40902acc28
Use the HuggingFace file for dreamshaper
2024-10-15 10:13:13 -07:00
Benjamin Berman
e5fc19a25b
Improve vanilla node importing and fix CUDA on CPU devices bug
2024-10-15 00:02:06 -07:00
Benjamin Berman
9c9df424b4
Fix CUDA package with no drivers
2024-10-14 22:56:21 -07:00
comfyanonymous
f584758271
Cleanup some useless lines.
2024-10-14 21:02:39 -04:00
svdc
95b7cf9bbe
Fix Transformers FutureWarning ( #5140 )
...
* Update sd1_clip.py
Fix Transformers FutureWarning
* Update sd1_clip.py
Fix comment
2024-10-14 20:12:20 -04:00
doctorpangloss
b0d606a282
Improve installation instructions with non-deprecated messaging. 0.2.3 is now directly written as the server version.
2024-10-14 15:54:21 -07:00
doctorpangloss
8512f361fe
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-10-14 15:26:27 -07:00
comfyanonymous
3c60ecd7a8
Fix fp8 ops staying enabled.
2024-10-12 14:10:13 -04:00
comfyanonymous
7ae6626723
Remove useless argument.
2024-10-12 07:16:21 -04:00
comfyanonymous
6632365e16
model_options consistency between functions.
...
weight_dtype -> dtype
2024-10-11 20:51:19 -04:00
Kadir Nar
ad07796777
🐛 Add device to variable c ( #5210 )
2024-10-11 20:37:50 -04:00
doctorpangloss
c0d1c9f96d
Improve OpenAPI spec
2024-10-11 14:46:26 -07:00
doctorpangloss
ed078c2f1f
Update web content
2024-10-11 14:00:16 -07:00
doctorpangloss
b5df6c64fa
Update OpenAPI spec to be more accurate
2024-10-11 13:59:57 -07:00
doctorpangloss
79b465faf2
Increase server response timeouts
2024-10-11 13:52:17 -07:00
doctorpangloss
caa6a37936
Fix pylint error
2024-10-11 13:51:13 -07:00
doctorpangloss
1cc637cb4f
Fix SDXL clip issue, fix website header issue
2024-10-10 22:46:52 -07:00
doctorpangloss
f3da381869
Fix inference mode execution issues
2024-10-10 21:00:09 -07:00
doctorpangloss
a38968f098
Improvements to execution
...
- Validation errors that occur early in the lifecycle of prompt
execution now get propagated to their callers in the
EmbeddedComfyClient. This includes error messages about missing node
classes.
- The execution context now includes the node_id and the prompt_id
- Latent previews are now sent with a node_id. This is not backwards
compatible with old frontends.
- Dependency execution errors are now modeled correctly.
- Distributed progress encodes image previews with node and prompt IDs.
- Typing for models
- The frontend was updated to use node IDs with previews
- Improvements to torch.compile experiments
- Some controlnet_aux nodes were upstreamed
2024-10-10 19:30:18 -07:00
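Since the entry above notes that early validation errors (such as missing node classes) now propagate to EmbeddedComfyClient callers, a rough illustration of catching one follows. The import path and the queue_prompt coroutine are assumptions inferred from the commit text, not a documented API.

```python
# Hedged sketch only: shows the idea that prompt-validation failures surface as
# exceptions on the caller side of the embedded client. The module path and the
# queue_prompt method below are assumptions.
import asyncio
from comfy.client.embedded_comfy_client import EmbeddedComfyClient  # assumed location

async def run(prompt: dict) -> dict:
    async with EmbeddedComfyClient() as client:
        try:
            # assumed coroutine that queues a prompt and waits for its outputs
            return await client.queue_prompt(prompt)
        except Exception as exc:
            # With this change, errors raised early (e.g. an unknown node class)
            # reach the caller here instead of only appearing in server logs.
            print(f"prompt failed: {exc}")
            raise

# asyncio.run(run({"1": {"class_type": "DoesNotExist", "inputs": {}}}))
```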
doctorpangloss
69e523b89d
Experimental quantization support. Only Linux is meaningfully supported
2024-10-10 13:43:06 -07:00
comfyanonymous
1b80895285
Make clip loader nodes support loading sd3 t5xxl in lower precision.
...
Add attention mask support in the SD3 text encoder code.
2024-10-10 15:06:15 -04:00
doctorpangloss
5f26b76f59
Gracefully handle running with cuda torch on CPU only devices
2024-10-10 10:42:22 -07:00
Dr.Lt.Data
5f9d5a244b
Hotfix for the div zero occurrence when memory_used_encode is 0 ( #5121 )
...
https://github.com/comfyanonymous/ComfyUI/issues/5069#issuecomment-2382656368
2024-10-09 23:34:34 -04:00
Jonathan Avila
4b2f0d9413
Increase maximum macOS version to 15.0.1 when forcing upcast attention ( #5191 )
2024-10-09 22:21:41 -04:00
comfyanonymous
e38c94228b
Add a weight_dtype fp8_e4m3fn_fast to the Diffusion Model Loader node.
...
This is used to load weights in fp8 and use fp8 matrix multiplication.
2024-10-09 19:43:17 -04:00
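As a quick illustration of the option added above: selecting fp8_e4m3fn_fast in the "Load Diffusion Model" (UNETLoader) node loads the weights in fp8 and uses fp8 matrix multiplication. A hedged sketch of the equivalent direct call, assuming upstream's load_unet(unet_name, weight_dtype) signature and a placeholder file name:

```python
# Hedged sketch: choosing the new weight_dtype in the "Load Diffusion Model" node
# from Python. The load_unet signature follows upstream ComfyUI; the file name is
# a placeholder.
from nodes import UNETLoader

model = UNETLoader().load_unet("flux1-dev.safetensors", "fp8_e4m3fn_fast")[0]
# fp8_e4m3fn_fast stores the weights in fp8 e4m3fn and uses fp8 matrix
# multiplication on GPUs that support fp8 compute.
```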
doctorpangloss
c34403b574
Fix invalid device here
2024-10-09 11:21:19 -07:00
comfyanonymous
7ea7b2e77f
Slightly improve the fast previews for flux by adding a bias.
2024-10-09 09:48:18 -07:00
comfyanonymous
9786ea4a17
Use torch.nn.functional.linear in RGB preview code.
...
Add an optional bias to the latent RGB preview code.
2024-10-09 09:48:17 -07:00
comfyanonymous
91f458061c
Fix flux doras with diffusers keys.
2024-10-09 09:48:16 -07:00
City
7d1c420d19
Flux torch.compile fix ( #5082 )
2024-10-09 09:47:46 -07:00
doctorpangloss
99f0fa8b50
Enable sage attention autodetection
2024-10-09 09:27:05 -07:00
doctorpangloss
388dad67d5
Fix pylint errors in attention
2024-10-09 09:26:02 -07:00
doctorpangloss
bbe2ed330c
Memory management and compilation improvements
...
- Experimental support for sage attention on Linux
- Diffusers loader now supports model indices
- Transformers model management now aligns with updates to ComfyUI
- Flux layers correctly use unbind
- Add float8 support for model loading in more places
- Experimental quantization approaches from Quanto and torchao
- Model upscaling interacts with memory management better
This update also disables ROCm testing because it isn't reliable enough
on consumer hardware. ROCm is not really supported by the 7600.
2024-10-09 09:13:47 -07:00
comfyanonymous
203942c8b2
Fix flux doras with diffusers keys.
2024-10-08 19:03:40 -04:00
comfyanonymous
8dfa0cc552
Make SD3 fast previews a little better.
2024-10-07 09:19:59 -04:00
comfyanonymous
e5ecdfdd2d
Make fast previews for SDXL a little better by adding a bias.
2024-10-06 19:27:04 -04:00
comfyanonymous
7d29fbf74b
Slightly improve the fast previews for flux by adding a bias.
2024-10-06 17:55:46 -04:00
comfyanonymous
7d2467e830
Some minor cleanups.
2024-10-05 13:22:39 -04:00
Benjamin Berman
0a25b67ff8
Fix pylint errors
2024-10-04 21:12:37 -07:00
Benjamin Berman
afbb8aa154
Fix #23
2024-10-04 21:10:19 -07:00
doctorpangloss
de45dd50c5
Improve vanilla node importing for execution nodes
2024-10-04 10:56:43 -07:00
comfyanonymous
6f021d8aa0
Let --verbose have an argument for the log level.
2024-10-04 10:05:34 -04:00
comfyanonymous
d854ed0bcf
Allow using SD3 type te output on flux model.
2024-10-03 09:44:54 -04:00
comfyanonymous
abcd006b8c
Allow more permutations of clip/t5 in dual clip loader.
2024-10-03 09:26:11 -04:00
comfyanonymous
d985d1d7dc
CLIP Loader node now supports clip_l and clip_g only for SD3.
2024-10-02 04:25:17 -04:00
comfyanonymous
d1cdf51e1b
Refactor some of the TE detection code.
2024-10-01 07:08:41 -04:00
doctorpangloss
144fe6c421
Fix aiohttp bugs
2024-09-30 13:12:53 -07:00
comfyanonymous
b4626ab93e
Add simpletuner lycoris format for SD unet.
2024-09-30 06:03:27 -04:00
comfyanonymous
a9e459c2a4
Use torch.nn.functional.linear in RGB preview code.
...
Add an optional bias to the latent RGB preview code.
2024-09-29 11:27:49 -04:00
comfyanonymous
3bb4dec720
Fix issue with loras, lowvram and --fast fp8.
2024-09-28 14:42:32 -04:00
City
8733191563
Flux torch.compile fix ( #5082 )
2024-09-27 22:07:51 -04:00
doctorpangloss
6ef2d534b6
Fix polling for history too quickly. This will need an alternative approach so that readiness is immediate
2024-09-27 12:46:28 -07:00
doctorpangloss
d25394d386
API now supports fire-and-forget and checking on queue status; prefetch_count is now explicitly set to 1 for workers
2024-09-27 12:07:54 -07:00
doctorpangloss
a664a1fbc9
Add Flux inpainting model
2024-09-27 12:06:58 -07:00
doctorpangloss
667b77149e
Improve scaling and fit for diffusion
2024-09-26 18:08:34 -07:00
doctorpangloss
dbc8ee92a5
Add method to make this congruent with aio client
2024-09-26 18:08:15 -07:00
doctorpangloss
ab1a1de7a4
Fix missing arg to add_model_folder_path
2024-09-26 13:26:52 -07:00
doctorpangloss
a78f20178d
Fix linking error
2024-09-25 10:16:56 -07:00
doctorpangloss
8f58242c91
Fix frozenset vs. set issue in folder_paths
2024-09-24 20:36:50 -07:00
comfyanonymous
bdd4a22a2e
Fix flux TE not loading t5 embeddings.
2024-09-24 22:57:22 -04:00
chaObserv
479a427a48
Add dpmpp_2m_cfg_pp ( #4992 )
2024-09-24 02:42:56 -04:00
doctorpangloss
fa3176f96f
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-09-23 12:50:31 -07:00