patientx
3479eb5cb2
Merge branch 'comfyanonymous:master' into master
2024-11-23 22:20:28 +03:00
comfyanonymous
ab885b33ba
Skip layer guidance node now works on LTX-Video.
2024-11-23 10:33:05 -05:00
patientx
63b7bfe64d
Merge branch 'comfyanonymous:master' into master
2024-11-23 13:22:15 +03:00
comfyanonymous
839ed3368e
Some improvements to the lowvram unloading.
2024-11-22 20:59:15 -05:00
patientx
d877db2b77
Merge branch 'comfyanonymous:master' into master
2024-11-23 02:50:32 +03:00
comfyanonymous
6e8cdcd3cb
Fix some tiled VAE decoding issues with LTX-Video.
2024-11-22 18:00:34 -05:00
comfyanonymous
e5c3f4b87f
LTXV lowvram fixes.
2024-11-22 17:17:11 -05:00
comfyanonymous
bc6be6c11e
Some fixes to the lowvram system.
2024-11-22 16:40:04 -05:00
comfyanonymous
5818f6cf51
Remove print.
2024-11-22 10:49:15 -05:00
patientx
396d7cd9d0
Merge branch 'comfyanonymous:master' into master
2024-11-22 18:14:14 +03:00
comfyanonymous
5e16f1d24b
Support Lightricks LTX-Video model.
2024-11-22 08:46:39 -05:00
patientx
7720610a81
Merge branch 'comfyanonymous:master' into master
2024-11-22 10:25:27 +03:00
comfyanonymous
2fd9c1308a
Fix mask issue in some attention functions.
2024-11-22 02:10:09 -05:00
patientx
3652374a46
Merge branch 'comfyanonymous:master' into master
2024-11-22 01:51:35 +03:00
comfyanonymous
8f0009aad0
Support new flux model variants.
2024-11-21 08:38:23 -05:00
patientx
0ec91952c2
Merge branch 'comfyanonymous:master' into master
2024-11-21 15:51:39 +03:00
comfyanonymous
41444b5236
Add some new weight patching functionality.
...
Add a way to reshape lora weights.
Allow weight patches to all weights, not just .weight and .bias
Add a way for a lora to set a weight to a specific value.
2024-11-21 07:19:17 -05:00
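
A minimal sketch of the patch kinds described above, assuming a hypothetical apply_patch helper; the patch-type names "set" and "reshape" are illustrations, not ComfyUI's actual patch API:

import torch

def apply_patch(weight: torch.Tensor, patch) -> torch.Tensor:
    # Hypothetical helper; "set" and "reshape" are illustrative names.
    kind, value = patch
    if kind == "set":
        # force the weight to a specific value, as a lora can now do
        return value.to(weight.dtype).reshape(weight.shape)
    if kind == "reshape":
        # reshape a lora delta (up @ down) to fit a differently shaped weight
        up, down, shape = value
        return weight + torch.mm(up, down).reshape(shape).to(weight.dtype)
    return weight
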
patientx
16a71e8511
Merge branch 'comfyanonymous:master' into master
2024-11-21 10:44:02 +03:00
comfyanonymous
07f6eeaa13
Fix mask issue with attention_xformers.
2024-11-20 17:07:46 -05:00
patientx
cf71c14e18
Merge branch 'comfyanonymous:master' into master
2024-11-20 15:50:10 +03:00
comfyanonymous
22535d0589
Skip layer guidance now works on the stable audio model.
2024-11-20 07:33:06 -05:00
patientx
16d2a4b2e9
Merge branch 'comfyanonymous:master' into master
2024-11-19 13:20:18 +03:00
comfyanonymous
b699a15062
Refactor inpaint/ip2p code.
2024-11-19 03:25:25 -05:00
patientx
b966e670b2
Merge branch 'comfyanonymous:master' into master
2024-11-17 16:39:36 +03:00
comfyanonymous
d9f90965c8
Support block replace patches in auraflow.
2024-11-17 08:19:59 -05:00
patientx
56ee9e4e7f
Merge branch 'comfyanonymous:master' into master
2024-11-17 15:26:50 +03:00
comfyanonymous
41886af138
Add transformer options blocks replace patch to mochi.
2024-11-16 20:48:14 -05:00
patientx
45d8f6570b
Merge branch 'comfyanonymous:master' into master
2024-11-13 16:39:33 +03:00
comfyanonymous
3b9a6cf2b1
Fix issue with 3d masks.
2024-11-13 07:18:30 -05:00
patientx
10a97b0bb8
Merge branch 'comfyanonymous:master' into master
2024-11-12 20:22:47 +03:00
comfyanonymous
8ebf2d8831
Add block replace transformer_options to flux.
2024-11-12 08:00:39 -05:00
patientx
f6bd5cbac0
Merge branch 'comfyanonymous:master' into master
2024-11-11 23:56:06 +03:00
comfyanonymous
eb476e6ea9
Allow 1D masks for 1D latents.
2024-11-11 14:44:52 -05:00
patientx
96bccdce39
Merge branch 'comfyanonymous:master' into master
2024-11-11 13:42:59 +03:00
comfyanonymous
8b275ce5be
Support auto detecting some zsnr anime checkpoints.
2024-11-11 05:34:11 -05:00
comfyanonymous
2a18e98ccf
Refactor so that zsnr can be set in the sampling_settings.
2024-11-11 04:55:56 -05:00
patientx
0e27132a6b
Merge branch 'comfyanonymous:master' into master
2024-11-10 14:18:31 +03:00
comfyanonymous
bdeb1c171c
Fast previews for mochi.
2024-11-10 03:39:35 -05:00
patientx
5e34d323e7
Merge branch 'comfyanonymous:master' into master
2024-11-09 23:54:25 +03:00
comfyanonymous
8b90e50979
Properly handle and reshape masks when used on 3d latents.
2024-11-09 15:30:19 -05:00
patientx
0a02622727
Merge branch 'comfyanonymous:master' into master
2024-11-07 17:20:02 +03:00
comfyanonymous
2865f913f7
Free memory before doing tiled decode.
2024-11-07 04:01:24 -05:00
comfyanonymous
b49616f951
Make VAEDecodeTiled node work with video VAEs.
2024-11-07 03:47:12 -05:00
patientx
79efdb2f49
Merge branch 'comfyanonymous:master' into master
2024-11-06 14:07:38 +03:00
comfyanonymous
5e29e7a488
Remove scaled_fp8 key after reading it to silence warning.
2024-11-06 04:56:42 -05:00
patientx
c47638883b
Merge branch 'comfyanonymous:master' into master
2024-11-05 13:28:41 +03:00
comfyanonymous
8afb97cd3f
Fix unknown VAE being detected as the mochi VAE.
2024-11-05 03:43:27 -05:00
patientx
b6549b7520
Merge branch 'comfyanonymous:master' into master
2024-11-04 23:34:40 +03:00
contentis
69694f40b3
fix dynamic shape export (#5490)
2024-11-04 14:59:28 -05:00
patientx
59e2c35d29
Merge branch 'comfyanonymous:master' into master
2024-11-03 12:27:43 +03:00
comfyanonymous
6c9dbde7de
Fix mochi all in one checkpoint t5xxl key names.
2024-11-03 01:40:42 -05:00
patientx
cd78a77c63
Merge branch 'comfyanonymous:master' into master
2024-11-02 12:52:55 +03:00
comfyanonymous
fabf449feb
Mochi VAE encoder.
2024-11-01 17:33:09 -04:00
patientx
4526136e42
Merge branch 'comfyanonymous:master' into master
2024-11-01 13:16:45 +03:00
Aarni Koskela
1c8286a44b
Avoid SyntaxWarning in UniPC docstring (#5442)
2024-10-31 15:17:26 -04:00
comfyanonymous
1af4a47fd1
Bump up mac version for attention upcast bug workaround.
2024-10-31 15:15:31 -04:00
patientx
ae77a32cbb
Merge branch 'comfyanonymous:master' into master
2024-10-31 00:21:29 +03:00
comfyanonymous
daa1565b93
Fix diffusers flux controlnet regression.
2024-10-30 13:11:34 -04:00
patientx
29e7e35ac6
Merge branch 'comfyanonymous:master' into master
2024-10-30 11:58:03 +03:00
comfyanonymous
09fdb2b269
Support SD3.5 medium diffusers format weights and loras.
2024-10-30 04:24:00 -04:00
patientx
587b27ff26
Merge branch 'comfyanonymous:master' into master
2024-10-29 10:26:41 +03:00
comfyanonymous
30c0c81351
Add a way to patch blocks in SD3.
2024-10-29 00:48:32 -04:00
comfyanonymous
13b0ff8a6f
Update SD3 code.
2024-10-28 21:58:52 -04:00
patientx
0c5e42230d
Merge branch 'comfyanonymous:master' into master
2024-10-29 01:01:45 +03:00
comfyanonymous
c320801187
Remove useless line.
2024-10-28 17:41:12 -04:00
patientx
7b1f4c1094
Merge branch 'comfyanonymous:master' into master
2024-10-28 10:57:25 +03:00
comfyanonymous
669d9e4c67
Set default shift on mochi to 6.0
2024-10-27 22:21:04 -04:00
patientx
b712d0a718
Merge branch 'comfyanonymous:master' into master
2024-10-27 13:14:51 +03:00
comfyanonymous
9ee0a6553a
float16 inference is a bit broken on mochi.
2024-10-27 04:56:40 -04:00
patientx
0575f60ac4
Merge branch 'comfyanonymous:master' into master
2024-10-26 15:10:27 +03:00
comfyanonymous
5cbb01bc2f
Basic Genmo Mochi video model support.
...
To use:
"Load CLIP" node with t5xxl + type mochi
"Load Diffusion Model" node with the mochi dit file.
"Load VAE" with the mochi vae file.
EmptyMochiLatentVideo node for the latent.
euler + linear_quadratic in the KSampler node.
2024-10-26 06:54:00 -04:00
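
For reference, a linear-quadratic sigma schedule can be built roughly as below; this is one plausible construction for illustration, not ComfyUI's exact scheduler code, and the threshold default is an assumption:

import numpy as np

def linear_quadratic_sigmas(steps: int, threshold: float = 0.025) -> np.ndarray:
    # Noise rises linearly to `threshold` over the first half of the
    # steps, then quadratically up to 1.0.
    half = steps // 2
    linear = np.linspace(0.0, threshold, half, endpoint=False)
    t = np.linspace(0.0, 1.0, steps - half + 1)
    quadratic = threshold + (1.0 - threshold) * t ** 2
    noise = np.concatenate([linear, quadratic])
    return 1.0 - noise  # flow-model sigmas run from 1.0 down to 0.0
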
comfyanonymous
c3ffbae067
Make LatentUpscale nodes work on 3d latents.
2024-10-26 01:50:51 -04:00
comfyanonymous
d605677b33
Make euler_ancestral work on flow models (credit: Ashen).
2024-10-25 19:53:44 -04:00
patientx
6a9115d7ea
Merge branch 'comfyanonymous:master' into master
2024-10-24 11:06:56 +03:00
PsychoLogicAu
af8cf79a2d
support SimpleTuner lycoris lora for SD3 (#5340)
2024-10-24 01:18:32 -04:00
patientx
bb7e9ef812
Merge branch 'comfyanonymous:master' into master
2024-10-24 00:06:13 +03:00
comfyanonymous
66b0961a46
Fix ControlLora issue with last commit.
2024-10-23 17:02:40 -04:00
patientx
ca700c7638
Merge branch 'comfyanonymous:master' into master
2024-10-23 23:58:26 +03:00
comfyanonymous
754597c8a9
Clean up some controlnet code.
...
Remove self.device which was useless.
2024-10-23 14:19:05 -04:00
patientx
fd143ca944
Merge branch 'comfyanonymous:master' into master
2024-10-23 00:19:39 +03:00
comfyanonymous
915fdb5745
Fix lowvram edge case.
2024-10-22 16:34:50 -04:00
contentis
5a8a48931a
remove attention abstraction (#5324)
2024-10-22 14:02:38 -04:00
patientx
f0e8767deb
Merge branch 'comfyanonymous:master' into master
2024-10-22 13:39:15 +03:00
comfyanonymous
8ce2a1052c
Optimizations to --fast and scaled fp8.
2024-10-22 02:12:28 -04:00
comfyanonymous
f82314fcfc
Fix duplicate sigmas on beta scheduler.
2024-10-21 20:19:45 -04:00
comfyanonymous
0075c6d096
Mixed precision diffusion models with scaled fp8.
...
This change adds support for diffusion models where all the linears are
scaled fp8 while the other weights stay in their original precision.
2024-10-21 18:12:51 -04:00
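
A minimal sketch of the mixed-precision idea, assuming a hypothetical ScaledFp8Linear module (not ComfyUI's comfy.ops implementation): only the linear weights are stored as scaled fp8 and dequantized on the fly, so the rest of the model keeps its original precision.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaledFp8Linear(nn.Module):
    # Keep the linear weight as float8_e4m3fn plus a per-tensor scale and
    # dequantize at matmul time; every other weight can stay e.g. fp16.
    def __init__(self, weight: torch.Tensor, bias: torch.Tensor = None):
        super().__init__()
        scale = weight.abs().max() / torch.finfo(torch.float8_e4m3fn).max
        scale = scale.clamp(min=1e-12)  # avoid divide-by-zero on an all-zero weight
        self.register_buffer("weight_fp8", (weight / scale).to(torch.float8_e4m3fn))
        self.register_buffer("scale", scale)
        self.bias = bias

    def forward(self, x):
        w = self.weight_fp8.to(x.dtype) * self.scale.to(x.dtype)
        return F.linear(x, w, self.bias)
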
patientx
9fd46200ab
Merge branch 'comfyanonymous:master' into master
2024-10-21 12:23:49 +03:00
comfyanonymous
83ca891118
Support scaled fp8 t5xxl model.
2024-10-20 22:27:00 -04:00
patientx
5a425aeda1
Merge branch 'comfyanonymous:master' into master
2024-10-20 21:06:24 +03:00
comfyanonymous
f9f9faface
Fixed model merging issue with scaled fp8.
2024-10-20 06:24:31 -04:00
patientx
1678ea8f9c
Merge branch 'comfyanonymous:master' into master
2024-10-20 10:20:06 +03:00
comfyanonymous
471cd3eace
fp8 casting is fast on GPUs that support fp8 compute.
2024-10-20 00:54:47 -04:00
comfyanonymous
a68bbafddb
Support diffusion models with scaled fp8 weights.
2024-10-19 23:47:42 -04:00
comfyanonymous
73e3a9e676
Clamp output when rounding weight to prevent Nan.
2024-10-19 19:07:10 -04:00
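
The clamp matters because float8_e4m3fn has no inf, so out-of-range values can turn into NaN when cast. A dither-based approximation of clamped stochastic rounding, for illustration only (not ComfyUI's exact routine):

import torch

FP8 = torch.float8_e4m3fn

def stochastic_round_fp8(t: torch.Tensor, generator=None) -> torch.Tensor:
    finfo = torch.finfo(FP8)
    t = t.clamp(finfo.min, finfo.max)
    # roughly 1 ulp of dither for e4m3's 3-bit mantissa
    ulp = torch.clamp(t.abs() * 2.0 ** -3, min=finfo.tiny)
    noise = (torch.rand(t.shape, generator=generator,
                        dtype=t.dtype, device=t.device) - 0.5) * ulp
    return (t + noise).clamp(finfo.min, finfo.max).to(FP8)
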
patientx
d4b509799f
Merge branch 'comfyanonymous:master' into master
2024-10-18 11:14:20 +03:00
comfyanonymous
67158994a4
Use the lowvram cast_to function for everything.
2024-10-17 17:25:56 -04:00
patientx
fc4acf26c3
Merge branch 'comfyanonymous:master' into master
2024-10-16 23:54:39 +03:00
comfyanonymous
0bedfb26af
Revert "Fix Transformers FutureWarning ( #5140 )"
...
This reverts commit 95b7cf9bbe.
2024-10-16 12:36:19 -04:00
patientx
f143a803d6
Merge branch 'comfyanonymous:master' into master
2024-10-15 09:55:21 +03:00
comfyanonymous
f584758271
Cleanup some useless lines.
2024-10-14 21:02:39 -04:00
svdc
95b7cf9bbe
Fix Transformers FutureWarning (#5140)
...
* Update sd1_clip.py
Fix Transformers FutureWarning
* Update sd1_clip.py
Fix comment
2024-10-14 20:12:20 -04:00
patientx
a5e3eae103
Merge branch 'comfyanonymous:master' into master
2024-10-12 23:00:55 +03:00
comfyanonymous
3c60ecd7a8
Fix fp8 ops staying enabled.
2024-10-12 14:10:13 -04:00
comfyanonymous
7ae6626723
Remove useless argument.
2024-10-12 07:16:21 -04:00
patientx
eae1c15ab1
Merge branch 'comfyanonymous:master' into master
2024-10-12 11:02:28 +03:00
comfyanonymous
6632365e16
model_options consistency between functions.
...
weight_dtype -> dtype
2024-10-11 20:51:19 -04:00
Kadir Nar
ad07796777
🐛 Add device to variable c (#5210)
2024-10-11 20:37:50 -04:00
patientx
e4d24788f1
Merge branch 'comfyanonymous:master' into master
2024-10-11 00:22:45 +03:00
comfyanonymous
1b80895285
Make clip loader nodes support loading sd3 t5xxl in lower precision.
...
Add attention mask support in the SD3 text encoder code.
2024-10-10 15:06:15 -04:00
patientx
f9eab05f54
Merge branch 'comfyanonymous:master' into master
2024-10-10 10:30:17 +03:00
Dr.Lt.Data
5f9d5a244b
Hotfix for the div zero occurrence when memory_used_encode is 0 (#5121)
...
https://github.com/comfyanonymous/ComfyUI/issues/5069#issuecomment-2382656368
2024-10-09 23:34:34 -04:00
Jonathan Avila
4b2f0d9413
Increase maximum macOS version to 15.0.1 when forcing upcast attention (#5191)
2024-10-09 22:21:41 -04:00
comfyanonymous
e38c94228b
Add a weight_dtype fp8_e4m3fn_fast to the Diffusion Model Loader node.
...
This is used to load weights in fp8 and use fp8 matrix multiplication.
2024-10-09 19:43:17 -04:00
patientx
21d9bef158
Merge branch 'comfyanonymous:master' into master
2024-10-09 11:22:05 +03:00
comfyanonymous
203942c8b2
Fix flux doras with diffusers keys.
2024-10-08 19:03:40 -04:00
patientx
df174b7cbd
Merge branch 'comfyanonymous:master' into master
2024-10-07 22:22:38 +03:00
comfyanonymous
8dfa0cc552
Make SD3 fast previews a little better.
2024-10-07 09:19:59 -04:00
patientx
445390eae2
Merge branch 'comfyanonymous:master' into master
2024-10-07 10:15:48 +03:00
comfyanonymous
e5ecdfdd2d
Make fast previews for SDXL a little better by adding a bias.
2024-10-06 19:27:04 -04:00
comfyanonymous
7d29fbf74b
Slightly improve the fast previews for flux by adding a bias.
2024-10-06 17:55:46 -04:00
patientx
dfa888ea5e
Merge branch 'comfyanonymous:master' into master
2024-10-05 21:09:30 +03:00
comfyanonymous
7d2467e830
Some minor cleanups.
2024-10-05 13:22:39 -04:00
patientx
613a320875
Merge branch 'comfyanonymous:master' into master
2024-10-05 00:01:10 +03:00
comfyanonymous
6f021d8aa0
Let --verbose have an argument for the log level.
2024-10-04 10:05:34 -04:00
patientx
9ec6b86747
Merge branch 'comfyanonymous:master' into master
2024-10-03 22:16:09 +03:00
comfyanonymous
d854ed0bcf
Allow using SD3 type te output on flux model.
2024-10-03 09:44:54 -04:00
comfyanonymous
abcd006b8c
Allow more permutations of clip/t5 in dual clip loader.
2024-10-03 09:26:11 -04:00
patientx
4c9c09b888
Merge branch 'comfyanonymous:master' into master
2024-10-02 13:35:22 +03:00
comfyanonymous
d985d1d7dc
CLIP Loader node now supports clip_l and clip_g only for SD3.
2024-10-02 04:25:17 -04:00
patientx
8e0250f3d1
Merge branch 'comfyanonymous:master' into master
2024-10-01 19:40:37 +03:00
comfyanonymous
d1cdf51e1b
Refactor some of the TE detection code.
2024-10-01 07:08:41 -04:00
patientx
305c31142b
Merge branch 'comfyanonymous:master' into master
2024-10-01 11:55:23 +03:00
comfyanonymous
b4626ab93e
Add simpletuner lycoris format for SD unet.
2024-09-30 06:03:27 -04:00
patientx
fa20ab0287
Merge branch 'comfyanonymous:master' into master
2024-09-29 23:53:36 +03:00
comfyanonymous
a9e459c2a4
Use torch.nn.functional.linear in RGB preview code.
...
Add an optional bias to the latent RGB preview code.
2024-09-29 11:27:49 -04:00
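
What this amounts to: a latent RGB preview is a single linear map from each pixel's C latent channels to RGB, now with an optional bias. A sketch under assumed shapes (the factor values are model-specific and all names here are illustrative):

import torch
import torch.nn.functional as F

def latent_to_rgb_preview(latent, rgb_factors, bias=None):
    # latent: (B, C, H, W); rgb_factors: (3, C) matrix
    x = latent.movedim(1, -1)              # (B, C, H, W) -> (B, H, W, C)
    rgb = F.linear(x, rgb_factors, bias)   # (B, H, W, 3)
    return rgb.movedim(-1, 1)              # back to (B, 3, H, W)
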
patientx
9a0f99ba71
Merge branch 'comfyanonymous:master' into master
2024-09-28 22:28:25 +03:00
comfyanonymous
3bb4dec720
Fix issue with loras, lowvram and --fast fp8.
2024-09-28 14:42:32 -04:00
City
8733191563
Flux torch.compile fix (#5082)
2024-09-27 22:07:51 -04:00
patientx
d82af09346
Merge branch 'comfyanonymous:master' into master
2024-09-25 09:59:21 +03:00
comfyanonymous
bdd4a22a2e
Fix flux TE not loading t5 embeddings.
2024-09-24 22:57:22 -04:00
patientx
513f7cfda0
Merge branch 'comfyanonymous:master' into master
2024-09-24 10:58:58 +03:00
chaObserv
479a427a48
Add dpmpp_2m_cfg_pp (#4992)
2024-09-24 02:42:56 -04:00
patientx
ba87171925
Merge branch 'comfyanonymous:master' into master
2024-09-23 13:49:57 +03:00
comfyanonymous
3a0eeee320
Make --listen listen on both IPv4 and IPv6 at the same time by default.
2024-09-23 04:38:19 -04:00
patientx
6a1e215ad2
Merge branch 'comfyanonymous:master' into master
2024-09-23 10:07:44 +03:00
comfyanonymous
9c41bc8d10
Remove useless line.
2024-09-23 02:32:29 -04:00
patientx
63c21e0bfa
Merge branch 'comfyanonymous:master' into master
2024-09-22 13:35:44 +03:00
comfyanonymous
7a415f47a9
Add an optional VAE input to the ControlNetApplyAdvanced node.
...
Deprecate the other controlnet nodes.
2024-09-22 01:24:52 -04:00
patientx
ae611c9b61
Merge branch 'comfyanonymous:master' into master
2024-09-21 13:51:37 +03:00
comfyanonymous
dc96a1ae19
Load controlnet in fp8 if weights are in fp8.
2024-09-21 04:50:12 -04:00
comfyanonymous
2d810b081e
Add load_controlnet_state_dict function.
2024-09-21 01:51:51 -04:00
comfyanonymous
9f7e9f0547
Add an error message when a controlnet needs a VAE but none is given.
2024-09-21 01:33:18 -04:00
patientx
d3c8252c48
Merge branch 'comfyanonymous:master' into master
2024-09-20 10:14:16 +03:00
comfyanonymous
70a708d726
Fix model merging issue.
2024-09-20 02:31:44 -04:00
yoinked
e7d4782736
add laplace scheduler [2407.03297] (#4990)
...
* add laplace scheduler [2407.03297]
* should be here instead lol
* better settings
2024-09-19 23:23:09 -04:00
patientx
9e8686df8d
Merge branch 'comfyanonymous:master' into master
2024-09-19 19:57:21 +03:00
comfyanonymous
ad66f7c7d8
Add model_options to load_controlnet function.
2024-09-19 08:23:35 -04:00
Simon Lui
de8e8e3b0d
Fix xpu Pytorch nightly build from calling optimize which doesn't exist. (#4978)
2024-09-19 05:11:42 -04:00
patientx
4c62b6d8f0
Merge branch 'comfyanonymous:master' into master
2024-09-17 11:02:10 +03:00
pharmapsychotic
0b7dfa986d
Improve tiling calculations to reduce number of tiles that need to be processed. (#4944)
2024-09-17 03:51:10 -04:00
comfyanonymous
d514bb38ee
Add some option to model_options for the text encoder.
...
load_device, offload_device and the initial_device can now be set.
2024-09-17 03:49:54 -04:00
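
The three option names come straight from this commit; how they might be passed is sketched below, with the loader call left hypothetical:

import torch

model_options = {
    "load_device": torch.device("cuda:0"),    # where the text encoder runs
    "offload_device": torch.device("cpu"),    # where it is parked when idle
    "initial_device": torch.device("cpu"),    # where the weights land on load
}
# clip = comfy.sd.load_clip(ckpt_paths=[...], model_options=model_options)
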
comfyanonymous
0849c80e2a
get_key_patches now works without unloading the model.
2024-09-17 01:57:59 -04:00
patientx
8bcf408c79
Merge branch 'comfyanonymous:master' into master
2024-09-15 16:41:23 +03:00
comfyanonymous
e813abbb2c
Long CLIP L support for SDXL, SD3 and Flux.
...
Use the *CLIPLoader nodes.
2024-09-15 07:59:38 -04:00
patientx
b9a24c0146
Merge branch 'comfyanonymous:master' into master
2024-09-14 16:14:57 +03:00
comfyanonymous
f48e390032
Support AliMama SD3 and Flux inpaint controlnets.
...
Use the ControlNetInpaintingAliMamaApply node.
2024-09-14 09:05:16 -04:00
patientx
2710ea1aa2
Merge branch 'comfyanonymous:master' into master
2024-09-13 23:07:04 +03:00
comfyanonymous
cf80d28689
Support loading controlnets with different input.
2024-09-13 09:54:37 -04:00
patientx
8ca077b268
Merge branch 'comfyanonymous:master' into master
2024-09-12 16:48:54 +03:00
Robin Huang
b962db9952
Add cli arg to override user directory (#4856)
...
* Override user directory.
* Use overridden user directory.
* Remove prints.
* Remove references to global user_files.
* Remove unused replace_folder function.
* Remove newline.
* Remove global during get_user_directory.
* Add validation.
2024-09-12 08:10:27 -04:00
patientx
73c31987c6
Merge branch 'comfyanonymous:master' into master
2024-09-12 12:44:27 +03:00
comfyanonymous
9d720187f1
types -> comfy_types to fix import issue.
2024-09-12 03:57:46 -04:00
patientx
8eb7ca051a
Merge branch 'comfyanonymous:master' into master
2024-09-11 10:26:46 +03:00
comfyanonymous
9f4daca9d9
Doesn't really make sense for cfg_pp sampler to call regular one.
2024-09-11 02:51:36 -04:00
yoinked
b5d0f2a908
Add CFG++ to DPM++ 2S Ancestral (#3871)
...
* Update sampling.py
* Update samplers.py
* my bad
* "fix" the sampler
* Update samplers.py
* i named it wrong
* minor sampling improvements
mainly using a dynamic rho value (hey this sounds a lot like smea!!!)
* revert rho change
rho? r? its just 1/2
2024-09-11 02:49:44 -04:00
patientx
c4e18b7206
Merge branch 'comfyanonymous:master' into master
2024-09-08 22:20:50 +03:00
comfyanonymous
9c5fca75f4
Fix lora issue.
2024-09-08 10:10:47 -04:00
comfyanonymous
32a60a7bac
Support onetrainer text encoder Flux lora.
2024-09-08 09:31:41 -04:00
patientx
52f858d715
Merge branch 'comfyanonymous:master' into master
2024-09-07 14:47:35 +03:00
Jim Winkens
bb52934ba4
Fix import issue (#4815)
2024-09-07 05:28:32 -04:00
patientx
962638c9dc
Merge branch 'comfyanonymous:master' into master
2024-09-07 11:04:57 +03:00
comfyanonymous
ea77750759
Support a generic Comfy format for text encoder loras.
...
This is a format with keys like:
text_encoders.clip_l.transformer.text_model.encoder.layers.9.self_attn.v_proj.lora_up.weight
Instead of waiting for me to add support for specific lora formats, you can
convert your text encoder loras to this format.
If you want to see an example save a text encoder lora with the SaveLora
node with the commit right after this one.
2024-09-07 02:20:39 -04:00
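
A hedged sketch of such a conversion, assuming peft/diffusers-style source keys; only the target format comes from the commit, the helper and source naming scheme are illustrative:

def to_comfy_te_key(key: str) -> str:
    key = key.replace("text_encoder.", "text_encoders.clip_l.transformer.", 1)
    key = key.replace(".lora_B.weight", ".lora_up.weight")
    key = key.replace(".lora_A.weight", ".lora_down.weight")
    return key

# "text_encoder.text_model.encoder.layers.9.self_attn.v_proj.lora_B.weight"
# -> "text_encoders.clip_l.transformer.text_model.encoder.layers.9.self_attn.v_proj.lora_up.weight"
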
patientx
bc054d012b
Merge branch 'comfyanonymous:master' into master
2024-09-06 10:58:13 +03:00
comfyanonymous
c27ebeb1c2
Fix onnx export not working on flux.
2024-09-06 03:21:52 -04:00
patientx
6fdbaf1a76
Merge branch 'comfyanonymous:master' into master
2024-09-05 12:04:05 +03:00
comfyanonymous
5cbaa9e07c
Mistoline flux controlnet support.
2024-09-05 00:05:17 -04:00
comfyanonymous
c7427375ee
Prioritize freeing partially offloaded models first.
2024-09-04 19:47:32 -04:00
patientx
894c727ce2
Update model_management.py
2024-09-05 00:05:54 +03:00
patientx
b518390241
Merge branch 'comfyanonymous:master' into master
2024-09-04 22:36:12 +03:00
Jedrzej Kosinski
f04229b84d
Add emb_patch support to UNetModel forward (#4779)
2024-09-04 14:35:15 -04:00
patientx
64f428801e
Merge branch 'comfyanonymous:master' into master
2024-09-04 09:29:56 +03:00
Silver
f067ad15d1
Make live preview size a configurable launch argument (#4649)
...
* Make live preview size a configurable launch argument
* Remove import from testing phase
* Update cli_args.py
2024-09-03 19:16:38 -04:00
comfyanonymous
483004dd1d
Support newer glora format.
2024-09-03 17:02:19 -04:00
patientx
88ccc8f3a5
Merge branch 'comfyanonymous:master' into master
2024-09-03 11:01:28 +03:00
comfyanonymous
00a5d08103
Lower fp8 lora memory usage.
2024-09-03 01:25:05 -04:00
patientx
f2122a355b
Merge branch 'comfyanonymous:master' into master
2024-09-02 16:06:23 +03:00
comfyanonymous
d043997d30
Flux onetrainer lora.
2024-09-02 08:22:15 -04:00
patientx
93fa5c9ebb
Merge branch 'comfyanonymous:master' into master
2024-09-02 10:03:48 +03:00
comfyanonymous
8d31a6632f
Speed up inference on nvidia 10 series on Linux.
2024-09-01 17:29:31 -04:00
patientx
f02c0d3ed9
Merge branch 'comfyanonymous:master' into master
2024-09-01 14:34:56 +03:00
comfyanonymous
b643eae08b
Make minimum_inference_memory() depend on --reserve-vram
2024-09-01 01:18:34 -04:00
patientx
acc3d6a2ea
Update model_management.py
2024-08-30 20:13:28 +03:00
patientx
51af2440ef
Update model_management.py
2024-08-30 20:10:47 +03:00
patientx
3e226f02f3
Update model_management.py
2024-08-30 20:08:18 +03:00
comfyanonymous
935ae153e1
Cleanup.
2024-08-30 12:53:59 -04:00
patientx
aeab6d1370
Merge branch 'comfyanonymous:master' into master
2024-08-30 19:49:03 +03:00
Chenlei Hu
e91662e784
Get logs endpoint & system_stats additions (#4690)
...
* Add route for getting output logs
* Include ComfyUI version
* Move to own function
* Changed to memory logger
* Unify logger setup logic
* Fix get version git fallback
---------
Co-authored-by: pythongosssss <125205205+pythongosssss@users.noreply.github.com>
2024-08-30 12:46:37 -04:00
patientx
d8c04b9022
Merge branch 'comfyanonymous:master' into master
2024-08-30 19:42:36 +03:00
patientx
524cd140b5
removed bfloat16 from flux model support, resulting in a 2x speedup
2024-08-30 13:33:32 +03:00
patientx
a8652a052f
Merge branch 'comfyanonymous:master' into master
2024-08-30 12:14:01 +03:00
comfyanonymous
63fafaef45
Fix potential issue with hydit controlnets.
2024-08-30 04:58:41 -04:00
comfyanonymous
6eb5d64522
Fix glora lowvram issue.
2024-08-29 19:07:23 -04:00
comfyanonymous
10a79e9898
Implement model part of flux union controlnet.
2024-08-29 18:41:22 -04:00
patientx
a110c83af7
Merge branch 'comfyanonymous:master' into master
2024-08-29 20:51:26 +03:00
patientx
02c34de8b1
Merge branch 'comfyanonymous:master' into master
2024-08-29 11:55:26 +03:00
comfyanonymous
ea3f39bd69
InstantX depth flux controlnet.
2024-08-29 02:14:19 -04:00
comfyanonymous
b33cd61070
InstantX canny controlnet.
2024-08-28 19:02:50 -04:00
patientx
39c3ef9d66
Merge branch 'comfyanonymous:master' into master
2024-08-29 01:34:07 +03:00
comfyanonymous
d31e226650
Unify RMSNorm code.
2024-08-28 16:56:38 -04:00
patientx
7056a6aa6f
Merge branch 'comfyanonymous:master' into master
2024-08-28 09:36:30 +03:00
comfyanonymous
38c22e631a
Fix case where model was not properly unloaded in merging workflows.
2024-08-27 19:03:51 -04:00
patientx
bdd77b243b
Merge branch 'comfyanonymous:master' into master
2024-08-27 21:29:27 +03:00
Chenlei Hu
6bbdcd28ae
Support weight padding on diff weight patch (#4576)
2024-08-27 13:55:37 -04:00
patientx
2feaa21954
Merge branch 'comfyanonymous:master' into master
2024-08-27 20:24:31 +03:00
comfyanonymous
ab130001a8
Do RMSNorm in native type.
2024-08-27 02:41:56 -04:00
patientx
4193c15afe
Merge branch 'comfyanonymous:master' into master
2024-08-26 22:56:02 +03:00
comfyanonymous
2ca8f6e23d
Make the stochastic fp8 rounding reproducible.
2024-08-26 15:12:06 -04:00
comfyanonymous
7985ff88b9
Use less memory in float8 lora patching by doing calculations in fp16.
2024-08-26 14:45:58 -04:00
patientx
58594a0b47
Merge branch 'comfyanonymous:master' into master
2024-08-26 14:29:57 +03:00
comfyanonymous
c6812947e9
Fix potential memory leak.
2024-08-26 02:07:32 -04:00
patientx
902d97af7d
Merge branch 'comfyanonymous:master' into master
2024-08-25 23:35:11 +03:00
comfyanonymous
9230f65823
Fix some controlnets OOMing when loading.
2024-08-25 05:54:29 -04:00
patientx
c60a87396b
Merge branch 'comfyanonymous:master' into master
2024-08-24 11:31:17 +03:00
comfyanonymous
8ae23d8e80
Fix onnx export.
2024-08-23 17:52:47 -04:00
patientx
134569ea48
Update model_management.py
2024-08-23 14:10:09 +03:00
patientx
c98e8a0a55
Merge branch 'comfyanonymous:master' into master
2024-08-23 12:31:51 +03:00
comfyanonymous
7df42b9a23
Fix dora.
2024-08-23 04:58:59 -04:00
comfyanonymous
5d8bbb7281
Cleanup.
2024-08-23 04:06:27 -04:00
patientx
9f87d61bfe
Merge branch 'comfyanonymous:master' into master
2024-08-23 11:04:56 +03:00
comfyanonymous
2c1d2375d6
Fix.
2024-08-23 04:04:55 -04:00
Simon Lui
64ccb3c7e3
Rework IPEX check for future inclusion of XPU into Pytorch upstream and do a bit more optimization of ipex.optimize(). (#4562)
2024-08-23 03:59:57 -04:00
Scorpinaus
9465b23432
Added SD15_Inpaint_Diffusers model support for unet_config_from_diffusers_unet function (#4565)
2024-08-23 03:57:08 -04:00
patientx
1ef90b7ac8
Merge branch 'comfyanonymous:master' into master
2024-08-23 00:55:19 +03:00
comfyanonymous
c0b0da264b
Missing imports.
2024-08-22 17:20:51 -04:00
comfyanonymous
c26ca27207
Move calculate function to comfy.lora
2024-08-22 17:12:00 -04:00
comfyanonymous
7c6bb84016
Code cleanups.
2024-08-22 17:05:12 -04:00
patientx
dec75f11e4
Merge branch 'comfyanonymous:master' into master
2024-08-22 23:36:58 +03:00
comfyanonymous
c54d3ed5e6
Fix issue with models staying loaded in memory.
2024-08-22 15:58:20 -04:00
comfyanonymous
c7ee4b37a1
Try to fix some lora issues.
2024-08-22 15:32:18 -04:00
David
7b70b266d8
Generalize MacOS version check for force-upcast-attention (#4548)
...
This code automatically forces upcasting attention for MacOS versions 14.5 and 14.6. My computer returns the string "14.6.1" for `platform.mac_ver()[0]`, so this generalizes the comparison to catch more versions.
I am running MacOS Sonoma 14.6.1 (latest version) and was seeing black image generation on previously functional workflows after recent software updates. This PR solved the issue for me.
See comfyanonymous/ComfyUI#3521
2024-08-22 13:24:21 -04:00
comfyanonymous
8f60d093ba
Fix issue.
2024-08-22 10:38:24 -04:00
patientx
0cd8a740bb
Merge branch 'comfyanonymous:master' into master
2024-08-22 14:01:42 +03:00
comfyanonymous
843a7ff70c
fp16 is actually faster than fp32 on a GTX 1080.
2024-08-21 23:23:50 -04:00
patientx
febf8601dc
Merge branch 'comfyanonymous:master' into master
2024-08-22 00:07:14 +03:00
comfyanonymous
a60620dcea
Fix slow performance on 10 series Nvidia GPUs.
2024-08-21 16:39:02 -04:00
comfyanonymous
015f73dc49
Try a different type of flux fp16 fix.
2024-08-21 16:17:15 -04:00
comfyanonymous
904bf58e7d
Make --fast work on pytorch nightly.
2024-08-21 14:01:41 -04:00
patientx
0774774bb9
Merge branch 'comfyanonymous:master' into master
2024-08-21 19:19:41 +03:00
Svein Ove Aas
5f50263088
Replace use of .view with .reshape (#4522)
...
When generating images with fp8_e4_m3 Flux and batch size >1, using --fast, ComfyUI throws a "view size is not compatible with input tensor's size and stride" error pointing at the first of these two calls to view.
As reshape is semantically equivalent to view except for working on a broader set of inputs, there should be no downside to changing this. The only difference is that it clones the underlying data in cases where .view would error out. I have confirmed that the output still looks as expected, but cannot confirm that no mutable use is made of the tensors anywhere.
Note that --fast is only marginally faster than the default.
2024-08-21 11:21:48 -04:00
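
A minimal repro of the failure mode the PR describes:

import torch

x = torch.randn(4, 6).t()   # the transpose makes x non-contiguous
try:
    x.view(2, 12)           # raises: "view size is not compatible with
except RuntimeError as e:   #  input tensor's size and stride"
    print(e)
y = x.reshape(2, 12)        # reshape copies the data when view cannot work
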
patientx
ac75d4e4e0
Merge branch 'comfyanonymous:master' into master
2024-08-21 09:49:29 +03:00
comfyanonymous
03ec517afb
Remove useless line, adjust windows default reserved vram.
2024-08-21 00:47:19 -04:00
comfyanonymous
510f3438c1
Speed up fp8 matrix mult by using better code.
2024-08-20 22:53:26 -04:00
patientx
5656b5b956
Merge branch 'comfyanonymous:master' into master
2024-08-20 23:07:54 +03:00
comfyanonymous
ea63b1c092
Simpletrainer lycoris format.
2024-08-20 12:05:13 -04:00
comfyanonymous
9953f22fce
Add --fast argument to enable experimental optimizations.
...
Optimizations that might break things/lower quality will be put behind
this flag first and might be enabled by default in the future.
Currently the only optimization is float8_e4m3fn matrix multiplication on
4000/ADA series Nvidia cards or later. If you have one of these cards you
will see a speed boost when using fp8_e4m3fn flux for example.
2024-08-20 11:55:51 -04:00
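
The fp8 matmul path can be sketched with torch._scaled_mm, a private PyTorch API whose signature has shifted across releases; the following uses the torch 2.4/2.5-era form and is an illustration, not ComfyUI's code. It assumes an fp8-capable GPU (Ada/Hopper) and dimensions divisible by 16.

import torch

def fp8_matmul(a_fp16: torch.Tensor, b_fp16: torch.Tensor) -> torch.Tensor:
    f8 = torch.float8_e4m3fn
    amax = torch.finfo(f8).max
    scale_a = a_fp16.abs().max().float() / amax     # per-tensor scales
    scale_b = b_fp16.abs().max().float() / amax
    a = (a_fp16 / scale_a).to(f8)                       # row-major lhs
    b = (b_fp16 / scale_b).to(f8).t().contiguous().t()  # rhs must be column-major
    return torch._scaled_mm(a, b, scale_a=scale_a, scale_b=scale_b,
                            out_dtype=torch.float16)
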
comfyanonymous
d1a6bd6845
Support loading long clipl model with the CLIP loader node.
2024-08-20 10:46:36 -04:00
comfyanonymous
83dbac28eb
Properly set whether the clip text model has a pooled projection instead of using a hack.
2024-08-20 10:46:36 -04:00
comfyanonymous
538cb068bc
Make cast_to a nop if weight is already good.
2024-08-20 10:46:36 -04:00
comfyanonymous
1b3eee672c
Fix potential issue with multi devices.
2024-08-20 10:46:36 -04:00
patientx
9727da93ea
Merge branch 'comfyanonymous:master' into master
2024-08-20 12:35:06 +03:00
comfyanonymous
9eee470244
New load_text_encoder_state_dicts function.
...
Now you can load text encoders straight from a list of state dicts.
2024-08-19 17:36:35 -04:00
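
A hedged usage sketch: the function name comes from the commit, while the file names and exact arguments are assumptions.

import comfy.sd
from safetensors.torch import load_file

sd_clip = load_file("clip_l.safetensors")     # hypothetical file names
sd_t5 = load_file("t5xxl_fp16.safetensors")
clip = comfy.sd.load_text_encoder_state_dicts([sd_clip, sd_t5])
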
patientx
b20f5b1e32
Merge branch 'comfyanonymous:master' into master
2024-08-20 00:31:41 +03:00
comfyanonymous
045377ea89
Add a --reserve-vram argument if you don't want comfy to use all of it.
...
--reserve-vram 1.0 for example will make ComfyUI try to keep 1GB vram free.
This can also be useful if workflows are failing because of OOM errors, but
in that case please report whether --reserve-vram improves your situation.
2024-08-19 17:16:18 -04:00
comfyanonymous
4d341b78e8
Bug fixes.
2024-08-19 16:28:55 -04:00
patientx
9baf36e97b
Merge branch 'comfyanonymous:master' into master
2024-08-19 22:54:45 +03:00
comfyanonymous
6138f92084
Use better dtype for the lowvram lora system.
2024-08-19 15:35:25 -04:00
comfyanonymous
be0726c1ed
Remove duplication.
2024-08-19 15:26:50 -04:00
comfyanonymous
4506ddc86a
Better subnormal fp8 stochastic rounding. Thanks Ashen.
2024-08-19 13:38:03 -04:00
comfyanonymous
20ace7c853
Code cleanup.
2024-08-19 12:48:59 -04:00
patientx
74c8545fa6
Merge branch 'comfyanonymous:master' into master
2024-08-19 17:20:41 +03:00
comfyanonymous
22ec02afc0
Handle subnormal numbers in float8 rounding.
2024-08-19 05:51:08 -04:00
patientx
eb8d7f86d1
Merge branch 'comfyanonymous:master' into master
2024-08-19 10:02:52 +03:00
comfyanonymous
39f114c44b
Less broken non blocking?
2024-08-18 16:53:17 -04:00
patientx
a09513099a
Merge branch 'comfyanonymous:master' into master
2024-08-18 23:49:55 +03:00
comfyanonymous
6730f3e1a3
Disable non blocking.
...
It fixed some perf issues but caused other issues that need to be debugged.
2024-08-18 14:38:09 -04:00
comfyanonymous
73332160c8
Enable non blocking transfers in lowvram mode.
2024-08-18 10:29:33 -04:00
patientx
b397f02669
Merge branch 'comfyanonymous:master' into master
2024-08-18 11:33:07 +03:00
comfyanonymous
2622c55aff
Automatically use RF variant of dpmpp_2s_ancestral if RF model.
2024-08-18 00:47:25 -04:00
Ashen
1beb348ee2
dpmpp_2s_ancestral_RF for rectified flow (Flux, SD3 and Auraflow).
2024-08-18 00:33:30 -04:00
comfyanonymous
d31df04c8a
Indentation.
2024-08-17 23:00:44 -04:00
Xrvk
e68763f40c
Add Flux model support for InstantX style controlnet residuals (#4444)
...
* Add Flux model support for InstantX style controlnet residuals
* Refactor Flux controlnet residual step to a separate method
* Rollback minor change
* New format for applying controlnet residuals: input->double_blocks, output->single_blocks
* Adjust XLabs Flux controlnet to fit new syntax of applying Flux controlnet residuals
* Remove unnecessary import and minor style change
2024-08-17 22:58:23 -04:00
comfyanonymous
4f7a3cb6fb
unet -> diffusion_models.
2024-08-17 21:31:04 -04:00
patientx
cd0aeb046f
Merge branch 'comfyanonymous:master' into master
2024-08-18 00:12:31 +03:00
comfyanonymous
bb222ceddb
Fix loras having a weak effect when applied on fp8.
2024-08-17 15:20:17 -04:00
comfyanonymous
fca42836f2
Add model_options for text encoder.
2024-08-17 11:17:20 -04:00
patientx
135782b12d
Merge branch 'comfyanonymous:master' into master
2024-08-17 14:28:10 +03:00
comfyanonymous
cd5017c1c9
calculate_weight function to use a different dtype.
2024-08-17 01:06:08 -04:00
comfyanonymous
83f343146a
Fix potential lowvram issue.
2024-08-16 17:12:42 -04:00
patientx
a0daedce41
Merge branch 'comfyanonymous:master' into master
2024-08-16 23:16:25 +03:00
Matthew Turnshek
1770fc77ed
Implement support for taef1 latent previews (#4409)
...
* add taef1 handling to several places
* remove guess_latent_channels and add latent_channels info directly to flux model
* remove TODO
* fix numbers
2024-08-16 12:53:13 -04:00
patientx
7bd86b0896
Update model_management.py
2024-08-16 00:53:38 +03:00
comfyanonymous
5960f946a9
Move a few files from comfy -> comfy_execution.
...
Python code in the comfy folder should not import things from outside it.
2024-08-15 11:21:14 -04:00
guill
5cfe38f41c
Execution Model Inversion (#2666)
...
* Execution Model Inversion
This PR inverts the execution model -- from recursively calling nodes to
using a topological sort of the nodes. This change allows for
modification of the node graph during execution. This allows for two
major advantages:
1. The implementation of lazy evaluation in nodes. For example, if a
"Mix Images" node has a mix factor of exactly 0.0, the second image
input doesn't even need to be evaluated (and vice versa if the mix
factor is 1.0).
2. Dynamic expansion of nodes. This allows for the creation of dynamic
"node groups". Specifically, custom nodes can return subgraphs that
replace the original node in the graph. This is an incredibly
powerful concept. Using this functionality, it was easy to
implement:
a. Components (a.k.a. node groups)
b. Flow control (i.e. while loops) via tail recursion
c. All-in-one nodes that replicate the WebUI functionality
d. and more
All of those were able to be implemented entirely via custom nodes,
so those features are *not* a part of this PR. (There are some
front-end changes that should occur before that functionality is
made widely available, particularly around variant sockets.)
The custom nodes associated with this PR can be found at:
https://github.com/BadCafeCode/execution-inversion-demo-comfyui
Note that some of them require that variant socket types ("*") be
enabled.
* Allow `input_info` to be of type `None`
* Handle errors (like OOM) more gracefully
* Add a command-line argument to enable variants
This allows the use of nodes that have sockets of type '*' without
applying a patch to the code.
* Fix an overly aggressive assertion.
This could happen when attempting to evaluate `IS_CHANGED` for a node
during the creation of the cache (in order to create the cache key).
* Fix Pyright warnings
* Add execution model unit tests
* Fix issue with unused literals
Behavior should now match the master branch with regard to undeclared
inputs. Undeclared inputs that are socket connections will be used while
undeclared inputs that are literals will be ignored.
* Make custom VALIDATE_INPUTS skip normal validation
Additionally, if `VALIDATE_INPUTS` takes an argument named `input_types`,
that variable will be a dictionary of the socket type of all incoming
connections. If that argument exists, normal socket type validation will
not occur. This removes the last hurdle for enabling variant types
entirely from custom nodes, so I've removed that command-line option.
I've added appropriate unit tests for these changes.
* Fix example in unit test
This wouldn't have caused any issues in the unit test, but it would have
bugged the UI if someone copy+pasted it into their own node pack.
* Use fstrings instead of '%' formatting syntax
* Use custom exception types.
* Display an error for dependency cycles
Previously, dependency cycles that were created during node expansion
would cause the application to quit (due to an uncaught exception). Now,
we'll throw a proper error to the UI. We also make an attempt to 'blame'
the most relevant node in the UI.
* Add docs on when ExecutionBlocker should be used
* Remove unused functionality
* Rename ExecutionResult.SLEEPING to PENDING
* Remove superfluous function parameter
* Pass None for uneval inputs instead of default
This applies to `VALIDATE_INPUTS`, `check_lazy_status`, and lazy values
in evaluation functions.
* Add a test for mixed node expansion
This test ensures that a node that returns a combination of expanded
subgraphs and literal values functions correctly.
* Raise exception for bad get_node calls.
* Minor refactor of IsChangedCache.get
* Refactor `map_node_over_list` function
* Fix ui output for duplicated nodes
* Add documentation on `check_lazy_status`
* Add file for execution model unit tests
* Clean up Javascript code as per review
* Improve documentation
Converted some comments to docstrings as per review
* Add a new unit test for mixed lazy results
This test validates that when an output list is fed to a lazy node, the
node will properly evaluate previous nodes that are needed by any inputs
to the lazy node.
No code in the execution model has been changed. The test already
passes.
* Allow kwargs in VALIDATE_INPUTS functions
When kwargs are used, validation is skipped for all inputs as if they
had been mentioned explicitly.
* List cached nodes in `execution_cached` message
This was previously just bugged in this PR.
2024-08-15 11:21:11 -04:00
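
The PR's own "Mix Images" example, sketched as a lazy node. This is my reading of the mechanism described above (a "lazy" input flag plus check_lazy_status returning the names of inputs that still need evaluation), so treat the exact conventions as assumptions:

class MixImages:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "image1": ("IMAGE", {"lazy": True}),   # lazy: only evaluated if needed
            "image2": ("IMAGE", {"lazy": True}),
            "mix_factor": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0}),
        }}
    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "mix"

    def check_lazy_status(self, mix_factor, image1=None, image2=None):
        # Return the names of lazy inputs that must still be evaluated.
        needed = []
        if mix_factor < 1.0 and image1 is None:
            needed.append("image1")
        if mix_factor > 0.0 and image2 is None:
            needed.append("image2")
        return needed

    def mix(self, image1, image2, mix_factor):
        if mix_factor == 0.0:
            return (image1,)   # image2 was never computed
        if mix_factor == 1.0:
            return (image2,)   # image1 was never computed
        return (image1 * (1.0 - mix_factor) + image2 * mix_factor,)
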
comfyanonymous
0f9c2a7822
Try to fix SDXL OOM issue on some configurations.
2024-08-14 23:08:54 -04:00
comfyanonymous
f1d6cef71c
Revert "Disable cuda malloc by default."
...
This reverts commit 50bf66e5c4.
2024-08-14 08:38:07 -04:00
comfyanonymous
33fb282d5c
Fix issue.
2024-08-14 02:51:47 -04:00
comfyanonymous
50bf66e5c4
Disable cuda malloc by default.
2024-08-14 02:49:25 -04:00
comfyanonymous
a5af64d3ce
Revert "Not sure if this actually changes anything but it can't hurt."
...
This reverts commit 34608de2e9.
2024-08-14 01:05:17 -04:00
comfyanonymous
34608de2e9
Not sure if this actually changes anything but it can't hurt.
2024-08-13 13:29:16 -04:00
comfyanonymous
39fb74c5bd
Fix bug when model cannot be partially unloaded.
2024-08-13 03:57:55 -04:00
comfyanonymous
74e124f4d7
Fix some issues with TE being in lowvram mode.
2024-08-12 23:42:21 -04:00
comfyanonymous
a562c17e8a
load_unet -> load_diffusion_model with a model_options argument.
2024-08-12 23:20:57 -04:00
comfyanonymous
5942c17d55
Order of operations matters.
2024-08-12 21:56:18 -04:00
comfyanonymous
c032b11e07
xlabs Flux controlnet implementation. (#4260)
...
* xlabs Flux controlnet.
* Fix not working on old python.
* Remove comment.
2024-08-12 21:22:22 -04:00
comfyanonymous
b8ffb2937f
Memory tweaks.
2024-08-12 15:07:11 -04:00
comfyanonymous
5d43e75e5b
Fix some issues with the model sometimes not getting patched.
2024-08-12 12:27:54 -04:00
comfyanonymous
517f4a94e4
Fix some lora loading slowdowns.
2024-08-12 11:50:32 -04:00
comfyanonymous
52a471c5c7
Change name of log.
2024-08-12 10:35:06 -04:00
comfyanonymous
ad76574cb8
Fix some potential issues with the previous commits.
2024-08-12 00:23:29 -04:00
comfyanonymous
9acfe4df41
Support loading directly to vram with CLIPLoader node.
2024-08-12 00:06:01 -04:00
comfyanonymous
9829b013ea
Fix mistake in last commit.
2024-08-12 00:00:17 -04:00
comfyanonymous
5c69cde037
Load TE model straight to vram if certain conditions are met.
2024-08-11 23:52:43 -04:00
comfyanonymous
e9589d6d92
Add a way to set model dtype and ops from load_checkpoint_guess_config.
2024-08-11 08:50:34 -04:00
comfyanonymous
0d82a798a5
Remove the ckpt_path from load_state_dict_guess_config.
2024-08-11 08:37:35 -04:00
ljleb
925fff26fd
alternative to load_checkpoint_guess_config that accepts a loaded state dict (#4249)
...
* make alternative fn
* add back ckpt path as 2nd argument?
2024-08-11 08:36:52 -04:00
comfyanonymous
75b9b55b22
Fix issues with #4302 and support loading diffusers format flux.
2024-08-10 21:28:24 -04:00
Jaret Burkett
1765f1c60c
FLUX: Added full diffusers mapping for FLUX.1 schnell and dev. Adds full LoRA support from diffusers LoRAs. (#4302)
2024-08-10 21:26:41 -04:00
comfyanonymous
1de69fe4d5
Fix some issues with inference slowing down.
2024-08-10 16:21:25 -04:00
comfyanonymous
ae197f651b
Speed up hunyuan dit inference a bit.
2024-08-10 07:36:27 -04:00
comfyanonymous
1b5b8ca81a
Fix regression.
2024-08-09 21:45:21 -04:00
comfyanonymous
6678d5cf65
Fix regression.
2024-08-09 14:02:38 -04:00
TTPlanetPig
e172564eea
Update controlnet.py to fix the default controlnet weight as constant (#4285)
2024-08-09 13:40:05 -04:00
comfyanonymous
a3cc326748
Better fix for lowvram issue.
2024-08-09 12:16:25 -04:00
comfyanonymous
86a97e91fc
Fix controlnet regression.
2024-08-09 12:08:58 -04:00
comfyanonymous
5acdadc9f3
Fix issue with some lowvram weights.
2024-08-09 03:58:28 -04:00
comfyanonymous
55ad9d5f8c
Fix regression.
2024-08-09 03:36:40 -04:00
comfyanonymous
a9f04edc58
Implement text encoder part of HunyuanDiT loras.
2024-08-09 03:21:10 -04:00
comfyanonymous
a475ec2300
Cleanup HunyuanDit controlnets.
...
Use the ControlNetApply SD3 and HunyuanDiT node.
2024-08-09 02:59:34 -04:00
来新璐
06eb9fb426
feat: add support for HunYuanDit ControlNet (#4245)
...
* add support for HunYuanDit ControlNet
* fix hunyuandit controlnet
* fix typo in hunyuandit controlnet
* fix typo in hunyuandit controlnet
* fix code format style
* add control_weight support for HunyuanDit Controlnet
* use control_weights in HunyuanDit Controlnet
* fix typo
2024-08-09 02:59:24 -04:00
comfyanonymous
413322645e
Raw torch is faster than einops?
2024-08-08 22:09:29 -04:00
comfyanonymous
11200de970
Cleaner code.
2024-08-08 20:07:09 -04:00
comfyanonymous
037c38eb0f
Try to improve inference speed on some machines.
2024-08-08 17:29:27 -04:00
comfyanonymous
1e11d2d1f5
Better prints.
2024-08-08 17:29:27 -04:00
comfyanonymous
66d4233210
Fix.
2024-08-08 15:16:51 -04:00
comfyanonymous
591010b7ef
Support diffusers text attention flux loras.
2024-08-08 14:45:52 -04:00
comfyanonymous
08f92d55e9
Partial model shift support.
2024-08-08 14:45:06 -04:00
comfyanonymous
8115d8cce9
Add Flux fp16 support hack.
2024-08-07 15:08:39 -04:00
comfyanonymous
6969fc9ba4
Make supported_dtypes a priority list.
2024-08-07 15:00:06 -04:00
comfyanonymous
cb7c4b4be3
Workaround for lora OOM on lowvram mode.
2024-08-07 14:30:54 -04:00
comfyanonymous
1208863eca
Fix "Comfy" lora keys.
...
They are in this format now:
diffusion_model.full.model.key.name.lora_up.weight
2024-08-07 13:49:31 -04:00
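
My reading of that key format, as a hypothetical helper (not ComfyUI's parser): the segment between the "diffusion_model." prefix and the lora suffix is the dotted path of the patched model weight.

def comfy_lora_target(lora_key: str):
    prefix = "diffusion_model."
    if not lora_key.startswith(prefix):
        return None
    base = lora_key[len(prefix):]
    for suffix in (".lora_up.weight", ".lora_down.weight", ".alpha"):
        if base.endswith(suffix):
            return base[: -len(suffix)] + ".weight"
    return None
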