comfyanonymous
f40076096e
Cleanup some lumina te code.
2025-02-25 04:10:26 -05:00
patientx
d705fe2e0b
Merge branch 'comfyanonymous:master' into master
2025-02-24 13:42:27 +03:00
comfyanonymous
96d891cb94
Speedup on some models by not upcasting bfloat16 to float32 on mac.
2025-02-24 05:41:32 -05:00
patientx
8142770e5f
Merge branch 'comfyanonymous:master' into master
2025-02-23 14:51:43 +03:00
comfyanonymous
ace899e71a
Prioritize fp16 compute when using allow_fp16_accumulation
2025-02-23 04:45:54 -05:00
patientx
c15fe75f7b
Fix CUDA error CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasLtMatmulAlgoGetHeuristic`
2025-02-22 15:44:20 +03:00
patientx
26eb98b96f
Merge branch 'comfyanonymous:master' into master
2025-02-22 14:42:22 +03:00
comfyanonymous
aff16532d4
Remove some useless code.
2025-02-22 04:45:14 -05:00
comfyanonymous
072db3bea6
Assume the mac black image bug won't be fixed before v16.
2025-02-21 20:24:07 -05:00
comfyanonymous
a6deca6d9a
Latest mac still has the black image bug.
2025-02-21 20:14:30 -05:00
patientx
059397437b
Merge branch 'comfyanonymous:master' into master
2025-02-21 23:25:43 +03:00
comfyanonymous
41c30e92e7
Let all model memory be offloaded on nvidia.
2025-02-21 06:32:21 -05:00
patientx
603cacb14a
Merge branch 'comfyanonymous:master' into master
2025-02-20 23:06:56 +03:00
comfyanonymous
12da6ef581
Apparently directml supports fp16.
2025-02-20 09:30:24 -05:00
Silver
c5be423d6b
Fix link pointing to non-existing docs ( #6891 )
...
* Fix link pointing to non-existing docs
The current link is pointing to a path that no longer exists.
I changed it to point to the correct path for custom node datatypes.
* Update node_typing.py
2025-02-20 07:07:07 -05:00
patientx
a813e39d54
Merge branch 'comfyanonymous:master' into master
2025-02-19 16:19:26 +03:00
maedtb
5715be2ca9
Fix Hunyuan unet config detection for some models. ( #6877 )
...
The change to support 32 channel hunyuan models is missing the `key_prefix` on the key.
This addresses a complaint in the comments of acc152b674 .
2025-02-19 07:14:45 -05:00
patientx
4e6a5fc548
Merge branch 'comfyanonymous:master' into master
2025-02-19 13:52:34 +03:00
bymyself
afc85cdeb6
Add Load Image Output node ( #6790 )
...
* add LoadImageOutput node
* add route for input/output/temp files
* update node_typing.py
* use literal type for image_folder field
* mark node as beta
2025-02-18 17:53:01 -05:00
Jukka Seppänen
acc152b674
Support loading and using SkyReels-V1-Hunyuan-I2V ( #6862 )
...
* Support SkyReels-V1-Hunyuan-I2V
* VAE scaling
* Fix T2V (oops)
* Proper latent scaling
2025-02-18 17:06:54 -05:00
patientx
3bde94efbb
Merge branch 'comfyanonymous:master' into master
2025-02-18 16:27:10 +03:00
comfyanonymous
b07258cef2
Fix typo.
...
Let me know if this slows things down on 2000 series and below.
2025-02-18 07:28:33 -05:00
patientx
ef2e97356e
Merge branch 'comfyanonymous:master' into master
2025-02-17 14:04:15 +03:00
comfyanonymous
31e54b7052
Improve AMD arch detection.
2025-02-17 04:53:40 -05:00
comfyanonymous
8c0bae50c3
bf16 manual cast works on old AMD.
2025-02-17 04:42:40 -05:00
patientx
e8bf0ca27f
Merge branch 'comfyanonymous:master' into master
2025-02-17 12:42:02 +03:00
comfyanonymous
530412cb9d
Refactor torch version checks to be more future proof.
2025-02-17 04:36:45 -05:00
patientx
45c55f6cd0
Merge branch 'comfyanonymous:master' into master
2025-02-16 14:37:41 +03:00
comfyanonymous
e2919d38b4
Disable bf16 on AMD GPUs that don't support it.
2025-02-16 05:46:10 -05:00
patientx
6f05ace055
Merge branch 'comfyanonymous:master' into master
2025-02-14 13:50:23 +03:00
comfyanonymous
1cd6cd6080
Disable pytorch attention in VAE for AMD.
2025-02-14 05:42:14 -05:00
patientx
f125a37bdf
Update model_management.py
2025-02-14 12:33:27 +03:00
patientx
99d2824d5a
Update model_management.py
2025-02-14 12:30:19 +03:00
comfyanonymous
d7b4bf21a2
Auto enable mem efficient attention on gfx1100 on pytorch nightly 2.7
...
I'm not sure which arches are supported yet. If you see improvements in
memory usage while using --use-pytorch-cross-attention on your AMD GPU, let
me know and I will add it to the list.
2025-02-14 04:18:14 -05:00
patientx
4d66aa9709
Merge branch 'comfyanonymous:master' into master
2025-02-14 11:00:12 +03:00
comfyanonymous
019c7029ea
Add a way to set a different compute dtype for the model at runtime.
...
Currently only works for diffusion models.
2025-02-13 20:34:03 -05:00
patientx
bce4176d3d
Fixes to use pytorch-attention
2025-02-13 19:17:35 +03:00
patientx
f9ee02080f
Merge branch 'comfyanonymous:master' into master
2025-02-13 16:37:24 +03:00
comfyanonymous
8773ccf74d
Better memory estimation for ROCm devices that support mem efficient attention.
...
There is no way to check whether the card actually supports it, so it is
assumed to if you use --use-pytorch-cross-attention.
2025-02-13 08:32:36 -05:00
patientx
e4448cf48e
Merge branch 'comfyanonymous:master' into master
2025-02-12 15:50:23 +03:00
comfyanonymous
1d5d6586f3
Fix ruff.
2025-02-12 06:49:16 -05:00
zhoufan2956
35740259de
mix_ascend_bf16_infer_err ( #6794 )
2025-02-12 06:48:11 -05:00
comfyanonymous
ab888e1e0b
Add add_weight_wrapper function to model patcher.
...
Functions can now easily be added to wrap/modify model weights.
2025-02-12 05:55:35 -05:00
comfyanonymous
d9f0fcdb0c
Cleanup.
2025-02-11 17:17:03 -05:00
HishamC
b124256817
Fix for running via DirectML ( #6542 )
...
* Fix for running via DirectML
Fix DirectML empty image generation issue with Flux1. Add CPU fallback for unsupported paths. Verified the model works on AMD GPUs
* fix formatting
* update causal mask calculation
2025-02-11 17:11:32 -05:00
patientx
196fc385e1
Merge branch 'comfyanonymous:master' into master
2025-02-11 16:38:17 +03:00
comfyanonymous
af4b7c91be
Make --force-fp16 actually force the diffusion model to be fp16.
2025-02-11 08:33:09 -05:00
patientx
2a0bc66fed
Merge branch 'comfyanonymous:master' into master
2025-02-10 15:41:15 +03:00
comfyanonymous
4027466c80
Make lumina model work with any latent resolution.
2025-02-10 00:24:20 -05:00
patientx
9561b03cc7
Merge branch 'comfyanonymous:master' into master
2025-02-09 15:33:03 +03:00
comfyanonymous
095d867147
Remove useless function.
2025-02-09 07:02:57 -05:00
Pam
caeb27c3a5
res_multistep: Fix cfgpp and add ancestral samplers ( #6731 )
2025-02-08 19:39:58 -05:00
comfyanonymous
3d06e1c555
Make error more clear to user.
2025-02-08 18:57:24 -05:00
catboxanon
43a74c0de1
Allow FP16 accumulation with --fast ( #6453 )
...
Currently only applies to PyTorch nightly releases. (>=20250208)
2025-02-08 17:00:56 -05:00
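(For context, a minimal sketch of how this kind of toggle is applied; the attribute name is an assumption tied to PyTorch nightly builds and is guarded accordingly:)

```python
import torch

# Assumption: allow_fp16_accumulation only exists in PyTorch nightly builds
# (>= 2025-02-08); guard with hasattr so stable releases don't crash.
if hasattr(torch.backends.cuda.matmul, "allow_fp16_accumulation"):
    torch.backends.cuda.matmul.allow_fp16_accumulation = True
```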
patientx
9a9da027b2
Merge branch 'comfyanonymous:master' into master
2025-02-07 12:02:36 +03:00
comfyanonymous
079eccc92a
Don't compress http response by default.
...
Remove argument to disable it.
Add new --enable-compress-response-body argument to enable it.
2025-02-07 03:29:21 -05:00
patientx
4a1e3ee925
Merge branch 'comfyanonymous:master' into master
2025-02-06 14:33:29 +03:00
comfyanonymous
14880e6dba
Remove some useless code.
2025-02-06 05:00:37 -05:00
patientx
93c0fc3446
Update supported_models.py
2025-02-05 23:12:38 +03:00
patientx
f8c2ab631a
Merge branch 'comfyanonymous:master' into master
2025-02-05 23:10:21 +03:00
comfyanonymous
37cd448529
Set the shift for Lumina back to 6.
2025-02-05 14:49:52 -05:00
comfyanonymous
94f21f9301
Upcasting rope to fp32 seems to make no difference in this model.
2025-02-05 04:32:47 -05:00
comfyanonymous
60653004e5
Use regular numbers for rope in lumina model.
2025-02-05 04:17:25 -05:00
patientx
059629a5fb
Merge branch 'comfyanonymous:master' into master
2025-02-05 10:15:14 +03:00
comfyanonymous
a57d635c5f
Fix lumina 2 batches.
2025-02-04 21:48:11 -05:00
patientx
523b5352b8
Merge branch 'comfyanonymous:master' into master
2025-02-04 18:16:21 +03:00
comfyanonymous
8ac2dddeed
Lower the default shift of lumina to reduce artifacts.
2025-02-04 06:50:37 -05:00
patientx
b8ab0f2091
Merge branch 'comfyanonymous:master' into master
2025-02-04 12:32:22 +03:00
comfyanonymous
3e880ac709
Fix on python 3.9
2025-02-04 04:20:56 -05:00
comfyanonymous
e5ea112a90
Support Lumina 2 model.
2025-02-04 04:16:30 -05:00
patientx
bf081a208a
Merge branch 'comfyanonymous:master' into master
2025-02-02 18:27:25 +03:00
comfyanonymous
44e19a28d3
Use maximum negative value instead of -inf for masks in text encoders.
...
This is probably more correct.
2025-02-02 09:46:00 -05:00
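(Why a finite minimum beats float("-inf") for additive masks, shown with a standalone sketch rather than the repository's code: a fully masked softmax row stays finite instead of turning into NaN.)

```python
import torch

neg = torch.finfo(torch.float32).min          # largest finite negative value

masked = torch.full((4,), neg)
print(torch.softmax(masked, dim=-1))          # uniform weights, no NaN

masked_inf = torch.full((4,), float("-inf"))
print(torch.softmax(masked_inf, dim=-1))      # tensor([nan, nan, nan, nan])
```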
patientx
a5eb46e557
Merge branch 'comfyanonymous:master' into master
2025-02-02 17:30:16 +03:00
Dr.Lt.Data
0a0df5f136
better guide message for sageattention ( #6634 )
2025-02-02 09:26:47 -05:00
KarryCharon
24d6871e47
Add disable-compres-response-body CLI arg; add compress middleware ( #6672 )
2025-02-02 09:24:55 -05:00
patientx
2ccb7dd301
Merge branch 'comfyanonymous:master' into master
2025-02-01 15:33:58 +03:00
comfyanonymous
9e1d301129
Only use stable cascade lora format with cascade model.
2025-02-01 06:35:22 -05:00
patientx
97f5a7d844
Merge branch 'comfyanonymous:master' into master
2025-01-30 23:03:44 +03:00
comfyanonymous
8d8dc9a262
Allow batch of different sigmas when noise scaling.
2025-01-30 06:49:52 -05:00
patientx
98cf486504
Merge branch 'comfyanonymous:master' into master
2025-01-29 18:53:00 +03:00
filtered
222f48c0f2
Allow changing folder_paths.base_path via command line argument. ( #6600 )
...
* Reimpl. CLI arg directly inside folder_paths.
* Update tests to use CLI arg mocking.
* Revert last-minute refactor.
* Fix test state pollution.
2025-01-29 08:06:28 -05:00
patientx
a5c944f482
Merge branch 'comfyanonymous:master' into master
2025-01-28 19:20:13 +03:00
comfyanonymous
13fd4d6e45
More friendly error messages for corrupted safetensors files.
2025-01-28 09:41:09 -05:00
patientx
f77fea7fc6
Merge branch 'comfyanonymous:master' into master
2025-01-27 23:45:19 +03:00
comfyanonymous
255edf2246
Lower minimum ratio of loaded weights on Nvidia.
2025-01-27 05:26:51 -05:00
patientx
1442e34d9e
Merge branch 'comfyanonymous:master' into master
2025-01-26 15:27:37 +03:00
comfyanonymous
67feb05299
Remove redundant code.
2025-01-25 19:04:53 -05:00
patientx
73433fffa0
Merge branch 'comfyanonymous:master' into master
2025-01-24 15:16:37 +03:00
comfyanonymous
14ca5f5a10
Remove useless code.
2025-01-24 06:15:54 -05:00
patientx
076d037620
Merge branch 'comfyanonymous:master' into master
2025-01-23 13:56:49 +03:00
comfyanonymous
96e2a45193
Remove useless code.
2025-01-23 05:56:23 -05:00
Chenlei Hu
dfa2b6d129
Remove unused function lcm in conds.py ( #6572 )
2025-01-23 05:54:09 -05:00
patientx
d9515843d0
Merge branch 'comfyanonymous:master' into master
2025-01-23 02:28:08 +03:00
comfyanonymous
d6bbe8c40f
Remove support for python 3.8.
2025-01-22 17:04:30 -05:00
patientx
fc09aa398c
Merge branch 'comfyanonymous:master' into master
2025-01-22 15:04:13 +03:00
chaObserv
e857dd48b8
Add gradient estimation sampler ( #6554 )
2025-01-22 05:29:40 -05:00
patientx
3402d28bcd
Merge branch 'comfyanonymous:master' into master
2025-01-21 00:06:28 +03:00
comfyanonymous
fb2ad645a3
Add FluxDisableGuidance node to disable using the guidance embed.
2025-01-20 14:50:24 -05:00
patientx
39e164a441
Merge branch 'comfyanonymous:master' into master
2025-01-20 19:27:59 +03:00
comfyanonymous
d8a7a32779
Cleanup old TODO.
2025-01-20 03:44:13 -05:00
patientx
df418fc3fd
Added ROCm and HIP hiding for certain nodes getting errors under ZLUDA
2025-01-19 17:49:55 +03:00
patientx
88eae2b0d1
Merge branch 'comfyanonymous:master' into master
2025-01-19 15:11:53 +03:00
Sergii Dymchenko
ebf038d4fa
Use torch.special.expm1 ( #6388 )
...
* Use `torch.special.expm1`
This function provides greater precision than `exp(x) - 1` for small values of `x`.
Found with TorchFix https://github.com/pytorch-labs/torchfix/
* Use non-alias
2025-01-19 04:54:32 -05:00
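(A quick standalone demonstration of the precision gain:)

```python
import torch

x = torch.tensor([1e-8])
print(torch.exp(x) - 1)        # tensor([0.]) -- 1 + 1e-8 rounds to 1 in float32
print(torch.special.expm1(x))  # tensor([1.0000e-08]) -- accurate for small x
```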
patientx
8ee302c9dd
Merge branch 'comfyanonymous:master' into master
2025-01-19 02:40:53 +03:00
catboxanon
b1a02131c9
Remove comfy.samplers self-import ( #6506 )
2025-01-18 17:49:51 -05:00
patientx
e3207a0560
Merge branch 'comfyanonymous:master' into master
2025-01-18 16:08:27 +03:00
comfyanonymous
507199d9a8
Uni pc sampler now works with audio and video models.
2025-01-18 05:27:58 -05:00
comfyanonymous
2f3ab40b62
Add warning when using old pytorch versions.
2025-01-17 18:47:27 -05:00
patientx
5388be0f56
Merge branch 'comfyanonymous:master' into master
2025-01-17 09:41:17 +03:00
comfyanonymous
0aa2368e46
Fix some cosmos fp8 issues.
2025-01-16 17:45:37 -05:00
comfyanonymous
cca96a85ae
Fix cosmos VAE failing with videos longer than 121 frames.
2025-01-16 16:30:06 -05:00
patientx
4afa79a368
Merge branch 'comfyanonymous:master' into master
2025-01-16 17:23:00 +03:00
comfyanonymous
31831e6ef1
Code refactor.
2025-01-16 07:23:54 -05:00
comfyanonymous
88ceb28e20
Tweak hunyuan memory usage factor.
2025-01-16 06:31:03 -05:00
patientx
ed13b68e4f
Merge branch 'comfyanonymous:master' into master
2025-01-16 13:59:32 +03:00
comfyanonymous
23289a6a5c
Clean up some debug lines.
2025-01-16 04:24:39 -05:00
comfyanonymous
9d8b6c1f46
More accurate memory estimation for cosmos and hunyuan video.
2025-01-16 03:48:40 -05:00
patientx
a779e34c5b
Merge branch 'comfyanonymous:master' into master
2025-01-16 11:29:26 +03:00
comfyanonymous
6320d05696
Slightly lower hunyuan video memory usage.
2025-01-16 00:23:01 -05:00
comfyanonymous
25683b5b02
Lower cosmos diffusion model memory usage.
2025-01-15 23:46:42 -05:00
comfyanonymous
4758fb64b9
Lower cosmos VAE memory usage by a bit.
2025-01-15 22:57:52 -05:00
comfyanonymous
008761166f
Optimize first attention block in cosmos VAE.
2025-01-15 21:48:46 -05:00
patientx
fb0d92b160
Merge branch 'comfyanonymous:master' into master
2025-01-15 23:33:05 +03:00
comfyanonymous
cba58fff0b
Remove unsafe embedding load for very old pytorch.
2025-01-15 04:32:23 -05:00
patientx
3ece3091d4
Merge branch 'comfyanonymous:master' into master
2025-01-15 11:55:19 +03:00
comfyanonymous
2feb8d0b77
Force safe loading of files in torch format on pytorch 2.4+
...
If this breaks something for you make an issue.
2025-01-15 03:50:27 -05:00
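("Safe loading" here means the weights_only mode of torch.load; a minimal sketch with a hypothetical file name:)

```python
import torch

# weights_only=True restricts unpickling to tensors and plain containers,
# so a crafted checkpoint cannot execute arbitrary code when loaded.
state_dict = torch.load("model.ckpt", map_location="cpu", weights_only=True)
```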
patientx
104e6a9685
Merge branch 'comfyanonymous:master' into master
2025-01-15 03:59:33 +03:00
Pam
c78a45685d
Rewrite res_multistep sampler and implement res_multistep_cfg_pp sampler. ( #6462 )
2025-01-14 18:20:06 -05:00
patientx
4f01f72bed
Update zluda.py
2025-01-14 20:03:55 +03:00
patientx
c4861c74d4
Updated ZLUDA patching method
2025-01-14 19:57:22 +03:00
patientx
c3fc894ce2
Add files via upload
2025-01-14 19:54:44 +03:00
patientx
c7ebd121d6
Merge branch 'comfyanonymous:master' into master
2025-01-14 15:50:05 +03:00
comfyanonymous
3aaabb12d4
Implement Cosmos Image/Video to World (Video) diffusion models.
...
Use CosmosImageToVideoLatent to set the input image/video.
2025-01-14 05:14:10 -05:00
patientx
aa2a83ec35
Merge branch 'comfyanonymous:master' into master
2025-01-13 14:46:19 +03:00
comfyanonymous
1f1c7b7b56
Remove useless code.
2025-01-13 03:52:37 -05:00
patientx
b03621d13b
Merge branch 'comfyanonymous:master' into master
2025-01-12 14:31:19 +03:00
comfyanonymous
90f349f93d
Add res_multistep sampler from the cosmos code.
...
This sampler should work with all models.
2025-01-12 03:10:07 -05:00
patientx
d319705d78
Merge branch 'comfyanonymous:master' into master
2025-01-12 00:39:02 +03:00
Jedrzej Kosinski
6c9bd11fa3
Hooks Part 2 - TransformerOptionsHook and AdditionalModelsHook ( #6377 )
...
* Add 'sigmas' to transformer_options so that downstream code can know about the full scope of current sampling run, fix Hook Keyframes' guarantee_steps=1 inconsistent behavior with sampling split across different Sampling nodes/sampling runs by referencing 'sigmas'
* Cleaned up hooks.py, refactored Hook.should_register and add_hook_patches to use target_dict instead of target so that more information can be provided about the current execution environment if needed
* Refactor WrapperHook into TransformerOptionsHook, as there is no need to separate out Wrappers/Callbacks/Patches into different hook types (all affect transformer_options)
* Refactored HookGroup to also store a dictionary of hooks separated by hook_type, modified necessary code to no longer need to manually separate out hooks by hook_type
* In inner_sample, change "sigmas" to "sampler_sigmas" in transformer_options to not conflict with the "sigmas" that will overwrite "sigmas" in _calc_cond_batch
* Refactored 'registered' to be HookGroup instead of a list of Hooks, made AddModelsHook operational and compliant with should_register result, moved TransformerOptionsHook handling out of ModelPatcher.register_all_hook_patches, support patches in TransformerOptionsHook properly by casting any patches/wrappers/hooks to proper device at sample time
* Made hook clone code sane, made clear ObjectPatchHook and SetInjectionsHook are not yet operational
* Fix performance of hooks when hooks are appended via Cond Pair Set Props nodes by properly caching between positive and negative conds, make hook_patches_backup behave as intended (in the case that something pre-registers WeightHooks on the ModelPatcher instead of registering it at sample time)
* Filter only registered hooks on self.conds in CFGGuider.sample
* Make hook_scope functional for TransformerOptionsHook
* removed 4 whitespace lines to satisfy Ruff
* Add a get_injections function to ModelPatcher
* Made TransformerOptionsHook contribute to registered hooks properly, added some doc strings and removed a so-far unused variable
* Rename AddModelsHooks to AdditionalModelsHook, rename SetInjectionsHook to InjectionsHook (not yet implemented, but at least getting the naming figured out)
* Clean up a typehint
2025-01-11 12:20:23 -05:00
patientx
8ac79a7563
Merge branch 'comfyanonymous:master' into master
2025-01-11 15:04:10 +03:00
comfyanonymous
ee8a7ab69d
Fast latent preview for Cosmos.
2025-01-11 04:41:24 -05:00
patientx
7d45042e2e
Merge branch 'comfyanonymous:master' into master
2025-01-10 18:31:48 +03:00
comfyanonymous
2ff3104f70
WIP support for Nvidia Cosmos 7B and 14B text to world (video) models.
2025-01-10 09:14:16 -05:00
patientx
00cf1206e8
Merge branch 'comfyanonymous:master' into master
2025-01-10 15:44:49 +03:00
comfyanonymous
129d8908f7
Add argument to skip the output reshaping in the attention functions.
2025-01-10 06:27:37 -05:00
patientx
c37b5ccf29
Merge branch 'comfyanonymous:master' into master
2025-01-09 18:15:43 +03:00
comfyanonymous
ff838657fa
Cleaner handling of attention mask in ltxv model code.
2025-01-09 07:12:03 -05:00
patientx
0b933e1634
Merge branch 'comfyanonymous:master' into master
2025-01-09 12:57:31 +03:00
comfyanonymous
2307ff6746
Improve some logging messages.
2025-01-08 19:05:22 -05:00
patientx
867659b035
Merge branch 'comfyanonymous:master' into master
2025-01-08 01:49:22 +03:00
comfyanonymous
d0f3752e33
Properly calculate inner dim for t5 model.
...
This is required to support some different types of t5 models.
2025-01-07 17:33:03 -05:00
patientx
e1aa83d068
Merge branch 'comfyanonymous:master' into master
2025-01-07 13:57:13 +03:00
comfyanonymous
4209edf48d
Make a few more samplers deterministic.
2025-01-07 02:12:32 -05:00
patientx
58cfa6b7f3
Merge branch 'comfyanonymous:master' into master
2025-01-07 09:25:41 +03:00
Chenlei Hu
d055325783
Document get_attr and get_model_object ( #6357 )
...
* Document get_attr and get_model_object
* Update model_patcher.py
* Update model_patcher.py
* Update model_patcher.py
2025-01-06 20:12:22 -05:00
patientx
6c1e37c091
Merge branch 'comfyanonymous:master' into master
2025-01-06 14:50:48 +03:00
comfyanonymous
916d1e14a9
Make ancestral samplers more deterministic.
2025-01-06 03:04:32 -05:00
Jedrzej Kosinski
c496e53519
In inner_sample, change "sigmas" to "sampler_sigmas" in transformer_options to not conflict with the "sigmas" that will overwrite "sigmas" in _calc_cond_batch ( #6360 )
2025-01-06 01:36:47 -05:00
patientx
a5a09e45dd
Merge branch 'comfyanonymous:master' into master
2025-01-04 15:42:12 +03:00
comfyanonymous
d45ebb63f6
Remove old unused function.
2025-01-04 07:20:54 -05:00
patientx
c523c36aef
Merge branch 'comfyanonymous:master' into master
2025-01-02 19:55:49 +03:00
comfyanonymous
9e9c8a1c64
Clear cache as often on AMD as Nvidia.
...
I think the issue this was working around has been solved.
If you notice that this change slows things down or causes stutters on
your AMD GPU with ROCm on Linux, please report it.
2025-01-02 08:44:16 -05:00
patientx
bfc4fb0efb
Merge branch 'comfyanonymous:master' into master
2025-01-02 00:37:45 +03:00
Andrew Kvochko
0f11d60afb
Fix temporal tiling for decoder, remove redundant tiles. ( #6306 )
...
This commit fixes the temporal tile size calculation, and removes
a redundant tile at the end of the range when its elements are
completely covered by the previous tile.
Co-authored-by: Andrew Kvochko <a.kvochko@lightricks.com>
2025-01-01 16:29:01 -05:00
patientx
0dbf8238af
Merge branch 'comfyanonymous:master' into master
2025-01-01 15:55:00 +03:00
comfyanonymous
79eea51a1d
Fix and enforce all ruff W rules.
2025-01-01 03:08:33 -05:00
patientx
5c8d73f4b4
Merge branch 'comfyanonymous:master' into master
2025-01-01 02:14:54 +03:00
blepping
c0338a46a4
Fix unknown sampler error handling in calculate_sigmas function ( #6280 )
...
Modernize calculate_sigmas function
2024-12-31 17:33:50 -05:00
patientx
5e9aa9bb9e
Merge branch 'comfyanonymous:master' into master
2024-12-31 23:34:57 +03:00
Jedrzej Kosinski
1c99734e5a
Add missing model_options param ( #6296 )
2024-12-31 14:46:55 -05:00
patientx
419df8d958
Merge branch 'comfyanonymous:master' into master
2024-12-31 12:22:04 +03:00
filtered
67758f50f3
Fix custom node type-hinting examples ( #6281 )
...
* Fix import in comfy_types doc / sample
* Clarify docstring
2024-12-31 03:41:09 -05:00
comfyanonymous
b7572b2f87
Fix and enforce no trailing whitespace.
2024-12-31 03:16:37 -05:00
patientx
cbcd4aa616
Merge branch 'comfyanonymous:master' into master
2024-12-30 15:02:02 +03:00
blepping
a90aafafc1
Add kl_optimal scheduler ( #6206 )
...
* Add kl_optimal scheduler
* Rename kl_optimal_schedule to kl_optimal_scheduler to be more consistent
2024-12-30 05:09:38 -05:00
patientx
349ddc0b92
Merge branch 'comfyanonymous:master' into master
2024-12-30 12:21:28 +03:00
comfyanonymous
d9b7cfac7e
Fix and enforce new lines at the end of files.
2024-12-30 04:14:59 -05:00
patientx
cd869622df
Merge branch 'comfyanonymous:master' into master
2024-12-30 11:54:26 +03:00
Jedrzej Kosinski
3507870535
Add 'sigmas' to transformer_options so that downstream code can know about the full scope of current sampling run, fix Hook Keyframes' guarantee_steps=1 inconsistent behavior with sampling split across different Sampling nodes/sampling runs by referencing 'sigmas' ( #6273 )
2024-12-30 03:42:49 -05:00
patientx
cd502aa403
Merge branch 'comfyanonymous:master' into master
2024-12-29 13:09:24 +03:00
comfyanonymous
a618f768e0
Auto reshape 2d to 3d latent for single image generation on video model.
2024-12-29 02:26:49 -05:00
patientx
9f71405928
Merge branch 'comfyanonymous:master' into master
2024-12-28 14:38:54 +03:00
comfyanonymous
b504bd606d
Add ruff rule for empty line with trailing whitespace.
2024-12-28 05:23:08 -05:00
patientx
b7c91dd68c
Merge branch 'comfyanonymous:master' into master
2024-12-28 12:02:44 +03:00
comfyanonymous
d170292594
Remove some trailing white space.
2024-12-27 18:02:30 -05:00
filtered
9cfd185676
Add option to log non-error output to stdout ( #6243 )
...
* nit
* Add option to log non-error output to stdout
- No change to default behaviour
- Adds CLI argument: --log-stdout
- With this arg present, any logging of a level below logging.ERROR will be sent to stdout instead of stderr
2024-12-27 14:40:05 -05:00
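(A sketch of the described routing, not the repository's exact setup: records below ERROR go to stdout while ERROR and above stay on stderr.)

```python
import logging
import sys

stdout_handler = logging.StreamHandler(sys.stdout)
stdout_handler.addFilter(lambda record: record.levelno < logging.ERROR)

stderr_handler = logging.StreamHandler(sys.stderr)
stderr_handler.setLevel(logging.ERROR)

logging.basicConfig(level=logging.INFO, handlers=[stdout_handler, stderr_handler])
logging.info("goes to stdout")
logging.error("goes to stderr")
```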
patientx
b2a9683d75
Merge branch 'comfyanonymous:master' into master
2024-12-27 15:38:26 +03:00
comfyanonymous
4b5bcd8ac4
Closer memory estimation for hunyuan dit model.
2024-12-27 07:37:00 -05:00
comfyanonymous
ceb50b2cbf
Closer memory estimation for pixart models.
2024-12-27 07:30:09 -05:00
patientx
4590f75633
Merge branch 'comfyanonymous:master' into master
2024-12-27 09:59:51 +03:00
comfyanonymous
160ca08138
Use python 3.9 in launch test instead of 3.8
...
Fix ruff check.
2024-12-26 20:05:54 -05:00
Huazhong Ji
c4bfdba330
Support ascend npu ( #5436 )
...
* support ascend npu
Co-authored-by: YukMingLaw <lymmm2@163.com>
Co-authored-by: starmountain1997 <guozr1997@hotmail.com>
Co-authored-by: Ginray <ginray0215@gmail.com>
2024-12-26 19:36:50 -05:00
patientx
1f9acbbfca
Merge branch 'comfyanonymous:master' into master
2024-12-26 23:22:48 +03:00
comfyanonymous
ee9547ba31
Improve temporal VAE Encode (Tiled) math.
2024-12-26 07:18:49 -05:00
patientx
49fa16cc7a
Merge branch 'comfyanonymous:master' into master
2024-12-25 14:05:18 +03:00
comfyanonymous
19a64d6291
Cleanup some mac related code.
2024-12-25 05:32:51 -05:00
comfyanonymous
b486885e08
Disable bfloat16 on older mac.
2024-12-25 05:18:50 -05:00
comfyanonymous
0229228f3f
Clean up the VAE dtypes code.
2024-12-25 04:50:34 -05:00
comfyanonymous
99a1fb6027
Make fast fp8 take a bit less peak memory.
2024-12-24 18:05:19 -05:00
patientx
077ebf7b17
Merge branch 'comfyanonymous:master' into master
2024-12-24 16:53:56 +03:00
comfyanonymous
73e04987f7
Prevent black images in VAE Decode (Tiled) node.
...
Overlap should be a minimum of 1 with a tile size of 2 for tiled temporal VAE decoding.
2024-12-24 07:36:30 -05:00
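(A minimal sketch of the guard implied here, with hypothetical names: with a temporal tile size of 2, overlap must be clamped to at least 1 so adjacent tiles blend instead of leaving black frames.)

```python
def clamp_temporal_overlap(tile_t: int, overlap_t: int) -> int:
    # Hypothetical helper: keep at least 1 frame of overlap, but never
    # as many frames as the tile itself.
    return max(1, min(overlap_t, tile_t - 1))

print(clamp_temporal_overlap(2, 0))  # -> 1
```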
comfyanonymous
5388df784a
Add temporal tiling to VAE Encode (Tiled) node.
2024-12-24 07:10:09 -05:00
patientx
0f7b4f063d
Merge branch 'comfyanonymous:master' into master
2024-12-24 15:06:02 +03:00
comfyanonymous
bc6dac4327
Add temporal tiling to VAE Decode (Tiled) node.
...
You can now do tiled VAE decoding on the temporal direction for videos.
2024-12-23 20:03:37 -05:00
patientx
00afa8b34f
Merge branch 'comfyanonymous:master' into master
2024-12-23 11:36:49 +03:00
comfyanonymous
15564688ed
Add a try except block so if torch version is weird it won't crash.
2024-12-23 03:22:48 -05:00
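(The general shape of such a guard, as a sketch rather than the repository's exact code; strings like "2.4.0a0+git1234" or vendor builds may not parse cleanly.)

```python
import torch

try:
    # Keep only the leading numeric components of e.g. "2.4.0a0+git1234".
    torch_version = tuple(int(p) for p in torch.__version__.split(".")[:2])
except Exception:
    torch_version = (0, 0)  # unknown build: fall back to conservative defaults
```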
Simon Lui
c6b9c11ef6
Add oneAPI device selector for xpu and some other changes. ( #6112 )
...
* Add oneAPI device selector and some other minor changes.
* Fix device selector variable name.
* Flip minor version check sign.
* Undo changes to README.md.
2024-12-23 03:18:32 -05:00
patientx
403a081215
Merge branch 'comfyanonymous:master' into master
2024-12-23 10:33:06 +03:00
comfyanonymous
e44d0ac7f7
Make --novram completely offload weights.
...
This flag is mainly used for testing weight offloading; it shouldn't
actually be used in practice.
Remove useless import.
2024-12-23 01:51:08 -05:00
comfyanonymous
56bc64f351
Comment out some useless code.
2024-12-22 23:51:14 -05:00
zhangp365
f7d83b72e0
Fixed a bug in ldm/pixart/blocks.py ( #6158 )
2024-12-22 23:44:20 -05:00
comfyanonymous
80f07952d2
Fix lowvram issue with ltxv vae.
2024-12-22 23:20:17 -05:00
patientx
757335d901
Update supported_models.py
2024-12-23 02:54:49 +03:00
patientx
713eca2176
Update supported_models.py
2024-12-23 02:32:50 +03:00
patientx
e9d8cad2f0
Merge branch 'comfyanonymous:master' into master
2024-12-22 16:56:29 +03:00
comfyanonymous
57f330caf9
Relax minimum ratio of weights loaded in memory on nvidia.
...
This should make it possible to do higher-res images/longer videos by
further offloading weights to CPU memory.
Please report an issue if this slows things down on your system.
2024-12-22 03:06:37 -05:00
patientx
88ae56dcf9
Merge branch 'comfyanonymous:master' into master
2024-12-21 15:52:28 +03:00
comfyanonymous
da13b6b827
Get rid of meshgrid warning.
2024-12-20 18:02:12 -05:00
comfyanonymous
c86cd58573
Remove useless code.
2024-12-20 17:50:03 -05:00
comfyanonymous
b5fe39211a
Remove some useless code.
2024-12-20 17:43:50 -05:00
patientx
4d64ade41f
Merge branch 'comfyanonymous:master' into master
2024-12-21 01:30:32 +03:00
comfyanonymous
e946667216
Some fixes/cleanups to pixart code.
...
Commented out the masking related code because it is never used in this
implementation.
2024-12-20 17:10:52 -05:00
patientx
37fc9a3ff2
Merge branch 'comfyanonymous:master' into master
2024-12-21 00:37:40 +03:00
Chenlei Hu
d7969cb070
Replace print with logging ( #6138 )
...
* Replace print with logging
* nit
* nit
* nit
* nit
* nit
* nit
2024-12-20 16:24:55 -05:00
City
bddb02660c
Add PixArt model support ( #6055 )
...
* PixArt initial version
* PixArt Diffusers convert logic
* pos_emb and interpolation logic
* Reduce duplicate code
* Formatting
* Use optimized attention
* Edit empty token logic
* Basic PixArt LoRA support
* Fix aspect ratio logic
* PixArtAlpha text encode with conds
* Use same detection key logic for PixArt diffusers
2024-12-20 15:25:00 -05:00
patientx
07ea41ecc1
Merge branch 'comfyanonymous:master' into master
2024-12-20 13:06:18 +03:00
comfyanonymous
418eb7062d
Support new LTXV VAE.
2024-12-20 04:38:29 -05:00
patientx
ebf13dfe56
Merge branch 'comfyanonymous:master' into master
2024-12-20 10:05:56 +03:00
comfyanonymous
cac68ca813
Fix some more video tiled encode issues.
...
The downscale_ratio formula for the temporal dimension had issues with
some frame numbers.
2024-12-19 23:14:03 -05:00
comfyanonymous
52c1d933b2
Fix tiled hunyuan video VAE encode issue.
...
Some shapes like 1024x1024 with tile_size 256 and overlap 64 had issues.
2024-12-19 22:55:15 -05:00
comfyanonymous
2dda7c11a3
More proper fix for the memory issue.
2024-12-19 16:21:56 -05:00
comfyanonymous
3ad3248ad7
Fix lowvram bug when using a model multiple times in a row.
...
The memory system would load an extra 64MB each time until either the
model was completely in memory or it hit OOM.
2024-12-19 16:04:56 -05:00
patientx
778005af3d
Merge branch 'comfyanonymous:master' into master
2024-12-19 14:51:33 +03:00
comfyanonymous
c441048a4f
Make VAE Encode tiled node work with video VAE.
2024-12-19 05:31:39 -05:00
comfyanonymous
9f4b181ab3
Add fast previews for hunyuan video.
2024-12-18 18:24:23 -05:00
comfyanonymous
cbbf077593
Small optimizations.
2024-12-18 18:23:28 -05:00
patientx
43a0204b07
Merge branch 'comfyanonymous:master' into master
2024-12-18 15:17:15 +03:00
comfyanonymous
ff2ff02168
Support old diffusion-pipe hunyuan video loras.
2024-12-18 06:23:54 -05:00
patientx
947aba46c3
Merge branch 'comfyanonymous:master' into master
2024-12-18 12:33:32 +03:00
comfyanonymous
4c5c4ddeda
Fix regression in VAE code on old pytorch versions.
2024-12-18 03:08:28 -05:00
patientx
c062723ca5
Merge branch 'comfyanonymous:master' into master
2024-12-18 10:16:43 +03:00
comfyanonymous
37e5390f5f
Add: --use-sage-attention to enable SageAttention.
...
You need to have the library installed first.
2024-12-18 01:56:10 -05:00
comfyanonymous
a4f59bc65e
Pick attention implementation based on device in llama code.
2024-12-18 01:30:20 -05:00
patientx
0e5fa013b2
Merge branch 'comfyanonymous:master' into master
2024-12-18 00:43:24 +03:00
comfyanonymous
ca457f7ba1
Properly tokenize the template for hunyuan video.
2024-12-17 16:22:02 -05:00
comfyanonymous
cd6f615038
Fix tiled vae not working with some shapes.
2024-12-17 16:22:02 -05:00
patientx
4ace4e9ecb
Merge branch 'comfyanonymous:master' into master
2024-12-17 19:55:56 +03:00
comfyanonymous
e4e1bff605
Support diffusion-pipe hunyuan video lora format.
2024-12-17 07:14:21 -05:00
patientx
dc574cdc47
Merge branch 'comfyanonymous:master' into master
2024-12-17 13:57:30 +03:00
comfyanonymous
d6656b0c0c
Support llama hunyuan video text encoder in scaled fp8 format.
2024-12-17 04:19:22 -05:00
comfyanonymous
f4cdedea62
Fix regression with ltxv VAE.
2024-12-17 02:17:31 -05:00
comfyanonymous
39b1fc4ccc
Adjust used dtypes for hunyuan video VAE and diffusion model.
2024-12-16 23:31:10 -05:00
comfyanonymous
bda1482a27
Basic Hunyuan Video model support.
2024-12-16 19:35:40 -05:00
comfyanonymous
19ee5d9d8b
Don't expand mask when not necessary.
...
Expanding seems to slow down inference.
2024-12-16 18:22:50 -05:00
Raphael Walker
61b50720d0
Add support for attention masking in Flux ( #5942 )
...
* fix attention OOM in xformers
* allow passing attention mask in flux attention
* allow an attn_mask in flux
* attn masks can be done using replace patches instead of a separate dict
* fix return types
* fix return order
* enumerate
* patch the right keys
* arg names
* fix a silly bug
* fix xformers masks
* replace match with if, elif, else
* mask with image_ref_size
* remove unused import
* remove unused import 2
* fix pytorch/xformers attention
This corrects a weird inconsistency with skip_reshape.
It also allows masks of various shapes to be passed, which will be
automatically expanded (in a memory-efficient way) to a size that is
compatible with xformers or pytorch sdpa respectively.
* fix mask shapes
2024-12-16 18:21:17 -05:00
patientx
9704e3e617
Merge branch 'comfyanonymous:master' into master
2024-12-14 14:24:39 +03:00
comfyanonymous
e83063bf24
Support conv3d in PatchEmbed.
2024-12-14 05:46:04 -05:00
patientx
fd2eeb5e30
Merge branch 'comfyanonymous:master' into master
2024-12-13 16:09:33 +03:00
comfyanonymous
4e14032c02
Make pad_to_patch_size function work on multi dim.
2024-12-13 07:22:05 -05:00
patientx
3218ed8559
Merge branch 'comfyanonymous:master' into master
2024-12-13 11:21:47 +03:00
Chenlei Hu
563291ee51
Enforce all pyflake lint rules ( #6033 )
...
* Enforce F821 undefined-name
* Enforce all pyflake lint rules
2024-12-12 19:29:37 -05:00
Chenlei Hu
2cddbf0821
Lint and fix undefined names (1/N) ( #6028 )
2024-12-12 18:55:26 -05:00
Chenlei Hu
60749f345d
Lint and fix undefined names (3/N) ( #6030 )
2024-12-12 18:49:40 -05:00
Chenlei Hu
d9d7f3c619
Lint all unused variables ( #5989 )
...
* Enable F841
* Autofix
* Remove all unused variable assignment
2024-12-12 17:59:16 -05:00
patientx
5d059779d3
Merge branch 'comfyanonymous:master' into master
2024-12-12 15:26:42 +03:00
comfyanonymous
fd5dfb812c
Set initial load devices for te and model to mps device on mac.
2024-12-12 06:00:31 -05:00
patientx
0759f96414
Merge branch 'comfyanonymous:master' into master
2024-12-11 23:05:47 +03:00
comfyanonymous
7a7efe8424
Support loading some checkpoint files with nested dicts.
2024-12-11 08:04:54 -05:00
patientx
3b1247c8af
Merge branch 'comfyanonymous:master' into master
2024-12-11 12:14:46 +03:00
comfyanonymous
44db978531
Fix a few things in text enc code for models with no eos token.
2024-12-10 23:07:26 -05:00
patientx
3254013ec2
Merge branch 'comfyanonymous:master' into master
2024-12-11 00:42:55 +03:00
comfyanonymous
1c8d11e48a
Support different types of tokenizers.
...
Support tokenizers without an eos token.
Pass full sentences to tokenizer for more efficient tokenizing.
2024-12-10 15:03:39 -05:00
patientx
be4b0b5515
Merge branch 'comfyanonymous:master' into master
2024-12-10 15:06:50 +03:00
catboxanon
23827ca312
Add cond_scale to sampler_post_cfg_function ( #5985 )
2024-12-09 20:13:18 -05:00
patientx
7788049e2e
Merge branch 'comfyanonymous:master' into master
2024-12-10 01:10:54 +03:00
Chenlei Hu
0fd4e6c778
Lint unused import ( #5973 )
...
* Lint unused import
* nit
* Remove unused imports
* revert fix_torch import
* nit
2024-12-09 15:24:39 -05:00
comfyanonymous
e2fafe0686
Make CLIP set last layer node work with t5 models.
2024-12-09 03:57:14 -05:00
Haoming
fbf68c4e52
Clamp input ( #5928 )
2024-12-07 14:00:31 -05:00
patientx
025d9ed896
Merge branch 'comfyanonymous:master' into master
2024-12-07 00:34:25 +03:00
comfyanonymous
8af9a91e0c
A few improvements to #5937 .
2024-12-06 05:49:15 -05:00
Michael Kupchick
005d2d3a13
ltxv: add noise to guidance image to ensure generated motion. ( #5937 )
2024-12-06 05:46:08 -05:00
comfyanonymous
1e21f4c14e
Make timestep ranges more usable on rectified flow models.
...
This breaks some old workflows but should make the nodes actually useful.
2024-12-05 16:40:58 -05:00
patientx
d35238f60a
Merge branch 'comfyanonymous:master' into master
2024-12-04 23:52:16 +03:00
Chenlei Hu
48272448ad
[Developer Experience] Add node typing ( #5676 )
...
* [Developer Experience] Add node typing
* Shim StrEnum
* nit
* nit
* nit
2024-12-04 15:01:00 -05:00
patientx
743d281f78
Merge branch 'comfyanonymous:master' into master
2024-12-03 23:53:57 +03:00
comfyanonymous
452179fe4f
Make ModelPatcher class clone function work with inheritance.
2024-12-03 13:57:57 -05:00
patientx
b826d3e8c2
Merge branch 'comfyanonymous:master' into master
2024-12-03 14:51:59 +03:00
comfyanonymous
c1b92b719d
Some optimizations to euler a.
2024-12-03 06:11:52 -05:00
comfyanonymous
57e8bf6a9f
Fix case where a memory leak could cause crash.
...
Now, if code messes up and keeps references to a model object when it
should not, the only symptom will be endless prints in the log instead
of the next workflow crashing ComfyUI.
2024-12-02 19:49:49 -05:00
patientx
1d7cbcdcb2
Update model_management.py
2024-12-03 01:35:06 +03:00
patientx
9dea868c65
Reapply "Merge branch 'comfyanonymous:master' into master"
...
This reverts commit f3968d1611 .
2024-12-03 00:45:31 +03:00
patientx
f3968d1611
Revert "Merge branch 'comfyanonymous:master' into master"
...
This reverts commit 605425bdd6 , reversing
changes made to 74e6ad95f7 .
2024-12-03 00:10:22 +03:00
patientx
605425bdd6
Merge branch 'comfyanonymous:master' into master
2024-12-02 23:10:52 +03:00
Jedrzej Kosinski
0ee322ec5f
ModelPatcher Overhaul and Hook Support ( #5583 )
...
* Added hook_patches to ModelPatcher for weights (model)
* Initial changes to calc_cond_batch to eventually support hook_patches
* Added current_patcher property to BaseModel
* Consolidated add_hook_patches_as_diffs into add_hook_patches func, fixed fp8 support for model-as-lora feature
* Added call to initialize_timesteps on hooks in process_conds func, and added call prepare current keyframe on hooks in calc_cond_batch
* Added default_conds support in calc_cond_batch func
* Added initial set of hook-related nodes, added code to register hooks for loras/model-as-loras, small renaming/refactoring
* Made CLIP work with hook patches
* Added initial hook scheduling nodes, small renaming/refactoring
* Fixed MaxSpeed and default conds implementations
* Added support for adding weight hooks that aren't registered on the ModelPatcher at sampling time
* Made Set Clip Hooks node work with hooks from Create Hook nodes, began work on better Create Hook Model As LoRA node
* Initial work on adding 'model_as_lora' lora type to calculate_weight
* Continued work on simpler Create Hook Model As LoRA node, started to implement ModelPatcher callbacks, attachments, and additional_models
* Fix incorrect ref to create_hook_patches_clone after moving function
* Added injections support to ModelPatcher + necessary bookkeeping, added additional_models support in ModelPatcher, conds, and hooks
* Added wrappers to ModelPatcher to facilitate standardized function wrapping
* Started scaffolding for other hook types, refactored get_hooks_from_cond to organize hooks by type
* Fix skip_until_exit logic bug breaking injection after first run of model
* Updated clone_has_same_weights function to account for new ModelPatcher properties, improved AutoPatcherEjector usage in partially_load
* Added WrapperExecutor for non-classbound functions, added calc_cond_batch wrappers
* Refactored callbacks+wrappers to allow storing lists by id
* Added forward_timestep_embed_patch type, added helper functions on ModelPatcher for emb_patch and forward_timestep_embed_patch, added helper functions for removing callbacks/wrappers/additional_models by key, added custom_should_register prop to hooks
* Added get_attachment func on ModelPatcher
* Implement basic MemoryCounter system for determining whether cached weights due to hooks should be offloaded in hooks_backup
* Modified ControlNet/T2IAdapter get_control function to receive transformer_options as additional parameter, made the model_options stored in extra_args in inner_sample be a clone of the original model_options instead of same ref
* Added create_model_options_clone func, modified type annotations to use __future__ so that I can use the better type annotations
* Refactored WrapperExecutor code to remove need for WrapperClassExecutor (now gone), added sampler.sample wrapper (pending review, will likely keep but will see what hacks this could currently let me get rid of in ACN/ADE)
* Added Combine versions of Cond/Cond Pair Set Props nodes, renamed Pair Cond to Cond Pair, fixed default conds never applying hooks (due to hooks key typo)
* Renamed Create Hook Model As LoRA nodes to make the test node the main one (more changes pending)
* Added uuid to conds in CFGGuider and uuids to transformer_options to allow uniquely identifying conds in batches during sampling
* Fixed models not being unloaded properly due to current_patcher reference; the current ComfyUI model cleanup code requires that nothing else has a reference to the ModelPatcher instances
* Fixed default conds not respecting hook keyframes, made keyframes not reset cache when strength is unchanged, fixed Cond Set Default Combine throwing error, fixed model-as-lora throwing error during calculate_weight after a recent ComfyUI update, small refactoring/scaffolding changes for hooks
* Changed CreateHookModelAsLoraTest to be the new CreateHookModelAsLora, rename old ones as 'direct' and will be removed prior to merge
* Added initial support within CLIP Text Encode (Prompt) node for scheduling weight hook CLIP strength via clip_start_percent/clip_end_percent on conds, added schedule_clip toggle to Set CLIP Hooks node, small cleanup/fixes
* Fix range check in get_hooks_for_clip_schedule so that proper keyframes get assigned to corresponding ranges
* Optimized CLIP hook scheduling to treat same strength as same keyframe
* Less fragile memory management.
* Make encode_from_tokens_scheduled call cleaner, rollback change in model_patcher.py for hook_patches_backup dict
* Fix issue.
* Remove useless function.
* Prevent and detect some types of memory leaks.
* Run garbage collector when switching workflow if needed.
* Moved WrappersMP/CallbacksMP/WrapperExecutor to patcher_extension.py
* Refactored code to store wrappers and callbacks in transformer_options, added apply_model and diffusion_model.forward wrappers
* Fix issue.
* Refactored hooks in calc_cond_batch to be part of get_area_and_mult tuple, added extra_hooks to ControlBase to allow custom controlnets w/ hooks, small cleanup and renaming
* Fixed inconsistency of results when schedule_clip is set to False, small renaming/typo fixing, added initial support for ControlNet extra_hooks to work in tandem with normal cond hooks, initial work on calc_cond_batch merging all subdicts in returned transformer_options
* Modified callbacks and wrappers so that unregistered types can be used, allowing custom_nodes to have their own unique callbacks/wrappers if desired
* Updated different hook types to reflect actual progress of implementation, initial scaffolding for working WrapperHook functionality
* Fixed existing weight hook_patches (pre-registered) not working properly for CLIP
* Removed Register/Direct hook nodes since they were present only for testing, removed diff-related weight hook calculation as improved_memory removes unload_model_clones and using sample time registered hooks is less hacky
* Added clip scheduling support to all other native ComfyUI text encoding nodes (sdxl, flux, hunyuan, sd3)
* Made WrapperHook functional, added another wrapper/callback getter, added ON_DETACH callback to ModelPatcher
* Made opt_hooks append by default instead of replace, renamed comfy.hooks set functions to be more accurate
* Added apply_to_conds to Set CLIP Hooks, modified relevant code to allow text encoding to automatically apply hooks to output conds when apply_to_conds is set to True
* Fix cached_hook_patches not respecting target_device/memory_counter results
* Fixed issue with setting weights from hooks instead of copying them, added additional memory_counter check when caching hook patches
* Remove unnecessary torch.no_grad calls for hook patches
* Increased MemoryCounter minimum memory to leave free by *2 until a better way to get inference memory estimate of currently loaded models exists
* For encode_from_tokens_scheduled, allow start_percent and end_percent in add_dict to limit which scheduled conds get encoded for optimization purposes
* Removed a .to call on results of calculate_weight in patch_hook_weight_to_device that was screwing up the intermediate results for fp8 prior to being passed into stochastic_rounding call
* Made encode_from_tokens_scheduled work when no hooks are set on patcher
* Small cleanup of comments
* Turn off hook patch caching when only 1 hook present in sampling, replace some current_hook = None with calls to self.patch_hooks(None) instead to avoid a potential edge case
* On Cond/Cond Pair nodes, removed opt_ prefix from optional inputs
* Allow both FLOATS and FLOAT for floats_strength input
* Revert change, does not work
* Made patch_hook_weight_to_device respect set_func and convert_func
* Make discard_model_sampling True by default
* Add changes manually from 'master' so merge conflict resolution goes more smoothly
* Cleaned up text encode nodes with just a single clip.encode_from_tokens_scheduled call
* Make sure encode_from_tokens_scheduled will respect use_clip_schedule on clip
* Made nodes in nodes_hooks be marked as experimental (beta)
* Add get_nested_additional_models for cases where additional_models could have their own additional_models, and add robustness for circular additional_models references
* Made finalize_default_conds area math consistent with other sampling code
* Changed 'opt_hooks' input of Cond/Cond Pair Set Default Combine nodes to 'hooks'
* Remove a couple old TODO's and a no longer necessary workaround
2024-12-02 14:51:02 -05:00
comfyanonymous
79d5ceae6e
Improved memory management. ( #5450 )
...
* Less fragile memory management.
* Fix issue.
* Remove useless function.
* Prevent and detect some types of memory leaks.
* Run garbage collector when switching workflow if needed.
* Fix issue.
2024-12-02 14:39:34 -05:00
comfyanonymous
2d5b3e0078
Remove useless code.
2024-12-02 06:49:55 -05:00
patientx
74e6ad95f7
Merge branch 'comfyanonymous:master' into master
2024-12-01 18:49:39 +03:00
comfyanonymous
8e4118c0de
Make dpm_2_ancestral work with rectified flow.
2024-12-01 07:37:41 -05:00
patientx
47c9391ea0
Merge branch 'comfyanonymous:master' into master
2024-11-29 13:34:26 +03:00
comfyanonymous
26fb2c68e8
Add a way to disable cropping in the CLIPVisionEncode node.
2024-11-28 20:24:47 -05:00
patientx
614889a36b
Merge branch 'comfyanonymous:master' into master
2024-11-28 16:00:16 +03:00
comfyanonymous
bf2650a80e
Fast previews for ltxv.
2024-11-28 06:46:15 -05:00
patientx
b30020ae6b
Merge branch 'comfyanonymous:master' into master
2024-11-28 14:19:31 +03:00
comfyanonymous
b666539595
Remove print.
2024-11-27 20:28:39 -05:00
patientx
79029da19e
Merge branch 'comfyanonymous:master' into master
2024-11-28 01:59:08 +03:00
comfyanonymous
95d8713482
Missing parentheses.
2024-11-27 13:45:32 -05:00
patientx
234ec9aa01
Merge branch 'comfyanonymous:master' into master
2024-11-27 02:09:20 +03:00
comfyanonymous
497db6212f
Alternative fix for #5767
2024-11-26 17:53:04 -05:00
patientx
8f848c9e8b
Merge branch 'comfyanonymous:master' into master
2024-11-26 23:27:29 +03:00
comfyanonymous
4c82741b54
Support official SD3.5 Controlnets.
2024-11-26 11:31:25 -05:00
patientx
9fe09c02c2
Merge branch 'comfyanonymous:master' into master
2024-11-26 11:50:33 +03:00
comfyanonymous
15c39ea757
Support for the official mochi lora format.
2024-11-26 03:34:36 -05:00
comfyanonymous
b7143b74ce
Flux inpaint model does not work in fp16.
2024-11-26 01:33:01 -05:00
patientx
fc8f411fa8
Merge branch 'comfyanonymous:master' into master
2024-11-25 13:31:28 +03:00
comfyanonymous
61196d8857
Add option to inference the diffusion model in fp32 and fp64.
2024-11-25 05:00:23 -05:00
patientx
2ab6c68377
Merge branch 'comfyanonymous:master' into master
2024-11-24 14:53:09 +03:00
comfyanonymous
b4526d3fc3
Skip layer guidance now works on hydit model.
2024-11-24 05:54:30 -05:00
patientx
3479eb5cb2
Merge branch 'comfyanonymous:master' into master
2024-11-23 22:20:28 +03:00
comfyanonymous
ab885b33ba
Skip layer guidance node now works on LTX-Video.
2024-11-23 10:33:05 -05:00
patientx
63b7bfe64d
Merge branch 'comfyanonymous:master' into master
2024-11-23 13:22:15 +03:00
comfyanonymous
839ed3368e
Some improvements to the lowvram unloading.
2024-11-22 20:59:15 -05:00
patientx
d877db2b77
Merge branch 'comfyanonymous:master' into master
2024-11-23 02:50:32 +03:00
comfyanonymous
6e8cdcd3cb
Fix some tiled VAE decoding issues with LTX-Video.
2024-11-22 18:00:34 -05:00
comfyanonymous
e5c3f4b87f
LTXV lowvram fixes.
2024-11-22 17:17:11 -05:00
comfyanonymous
bc6be6c11e
Some fixes to the lowvram system.
2024-11-22 16:40:04 -05:00
comfyanonymous
5818f6cf51
Remove print.
2024-11-22 10:49:15 -05:00
patientx
396d7cd9d0
Merge branch 'comfyanonymous:master' into master
2024-11-22 18:14:14 +03:00
comfyanonymous
5e16f1d24b
Support Lightricks LTX-Video model.
2024-11-22 08:46:39 -05:00
patientx
7720610a81
Merge branch 'comfyanonymous:master' into master
2024-11-22 10:25:27 +03:00
comfyanonymous
2fd9c1308a
Fix mask issue in some attention functions.
2024-11-22 02:10:09 -05:00
patientx
3652374a46
Merge branch 'comfyanonymous:master' into master
2024-11-22 01:51:35 +03:00
comfyanonymous
8f0009aad0
Support new flux model variants.
2024-11-21 08:38:23 -05:00
patientx
0ec91952c2
Merge branch 'comfyanonymous:master' into master
2024-11-21 15:51:39 +03:00
comfyanonymous
41444b5236
Add some new weight patching functionality.
...
Add a way to reshape lora weights.
Allow weight patches to all weight not just .weight and .bias
Add a way for a lora to set a weight to a specific value.
2024-11-21 07:19:17 -05:00
patientx
16a71e8511
Merge branch 'comfyanonymous:master' into master
2024-11-21 10:44:02 +03:00
comfyanonymous
07f6eeaa13
Fix mask issue with attention_xformers.
2024-11-20 17:07:46 -05:00
patientx
cf71c14e18
Merge branch 'comfyanonymous:master' into master
2024-11-20 15:50:10 +03:00
comfyanonymous
22535d0589
Skip layer guidance now works on stable audio model.
2024-11-20 07:33:06 -05:00
patientx
16d2a4b2e9
Merge branch 'comfyanonymous:master' into master
2024-11-19 13:20:18 +03:00
comfyanonymous
b699a15062
Refactor inpaint/ip2p code.
2024-11-19 03:25:25 -05:00
patientx
b966e670b2
Merge branch 'comfyanonymous:master' into master
2024-11-17 16:39:36 +03:00
comfyanonymous
d9f90965c8
Support block replace patches in auraflow.
2024-11-17 08:19:59 -05:00
patientx
56ee9e4e7f
Merge branch 'comfyanonymous:master' into master
2024-11-17 15:26:50 +03:00
comfyanonymous
41886af138
Add transformer options blocks replace patch to mochi.
2024-11-16 20:48:14 -05:00
patientx
45d8f6570b
Merge branch 'comfyanonymous:master' into master
2024-11-13 16:39:33 +03:00
comfyanonymous
3b9a6cf2b1
Fix issue with 3d masks.
2024-11-13 07:18:30 -05:00
patientx
10a97b0bb8
Merge branch 'comfyanonymous:master' into master
2024-11-12 20:22:47 +03:00
comfyanonymous
8ebf2d8831
Add block replace transformer_options to flux.
2024-11-12 08:00:39 -05:00
patientx
f6bd5cbac0
Merge branch 'comfyanonymous:master' into master
2024-11-11 23:56:06 +03:00
comfyanonymous
eb476e6ea9
Allow 1D masks for 1D latents.
2024-11-11 14:44:52 -05:00
patientx
96bccdce39
Merge branch 'comfyanonymous:master' into master
2024-11-11 13:42:59 +03:00
comfyanonymous
8b275ce5be
Support auto detecting some zsnr anime checkpoints.
2024-11-11 05:34:11 -05:00
comfyanonymous
2a18e98ccf
Refactor so that zsnr can be set in the sampling_settings.
2024-11-11 04:55:56 -05:00
patientx
0e27132a6b
Merge branch 'comfyanonymous:master' into master
2024-11-10 14:18:31 +03:00
comfyanonymous
bdeb1c171c
Fast previews for mochi.
2024-11-10 03:39:35 -05:00
patientx
5e34d323e7
Merge branch 'comfyanonymous:master' into master
2024-11-09 23:54:25 +03:00
comfyanonymous
8b90e50979
Properly handle and reshape masks when used on 3d latents.
2024-11-09 15:30:19 -05:00
patientx
0a02622727
Merge branch 'comfyanonymous:master' into master
2024-11-07 17:20:02 +03:00
comfyanonymous
2865f913f7
Free memory before doing tiled decode.
2024-11-07 04:01:24 -05:00
comfyanonymous
b49616f951
Make VAEDecodeTiled node work with video VAEs.
2024-11-07 03:47:12 -05:00
patientx
79efdb2f49
Merge branch 'comfyanonymous:master' into master
2024-11-06 14:07:38 +03:00
comfyanonymous
5e29e7a488
Remove scaled_fp8 key after reading it to silence warning.
2024-11-06 04:56:42 -05:00
patientx
c47638883b
Merge branch 'comfyanonymous:master' into master
2024-11-05 13:28:41 +03:00
comfyanonymous
8afb97cd3f
Fix unknown VAE being detected as the mochi VAE.
2024-11-05 03:43:27 -05:00
patientx
b6549b7520
Merge branch 'comfyanonymous:master' into master
2024-11-04 23:34:40 +03:00
contentis
69694f40b3
Fix dynamic shape export ( #5490 )
2024-11-04 14:59:28 -05:00
patientx
59e2c35d29
Merge branch 'comfyanonymous:master' into master
2024-11-03 12:27:43 +03:00
comfyanonymous
6c9dbde7de
Fix mochi all in one checkpoint t5xxl key names.
2024-11-03 01:40:42 -05:00
patientx
cd78a77c63
Merge branch 'comfyanonymous:master' into master
2024-11-02 12:52:55 +03:00
comfyanonymous
fabf449feb
Mochi VAE encoder.
2024-11-01 17:33:09 -04:00
patientx
4526136e42
Merge branch 'comfyanonymous:master' into master
2024-11-01 13:16:45 +03:00
Aarni Koskela
1c8286a44b
Avoid SyntaxWarning in UniPC docstring ( #5442 )
2024-10-31 15:17:26 -04:00
comfyanonymous
1af4a47fd1
Bump up mac version for attention upcast bug workaround.
2024-10-31 15:15:31 -04:00
patientx
ae77a32cbb
Merge branch 'comfyanonymous:master' into master
2024-10-31 00:21:29 +03:00
comfyanonymous
daa1565b93
Fix diffusers flux controlnet regression.
2024-10-30 13:11:34 -04:00
patientx
29e7e35ac6
Merge branch 'comfyanonymous:master' into master
2024-10-30 11:58:03 +03:00
comfyanonymous
09fdb2b269
Support SD3.5 medium diffusers format weights and loras.
2024-10-30 04:24:00 -04:00
patientx
587b27ff26
Merge branch 'comfyanonymous:master' into master
2024-10-29 10:26:41 +03:00
comfyanonymous
30c0c81351
Add a way to patch blocks in SD3.
2024-10-29 00:48:32 -04:00
comfyanonymous
13b0ff8a6f
Update SD3 code.
2024-10-28 21:58:52 -04:00
patientx
0c5e42230d
Merge branch 'comfyanonymous:master' into master
2024-10-29 01:01:45 +03:00
comfyanonymous
c320801187
Remove useless line.
2024-10-28 17:41:12 -04:00
patientx
7b1f4c1094
Merge branch 'comfyanonymous:master' into master
2024-10-28 10:57:25 +03:00
comfyanonymous
669d9e4c67
Set default shift on mochi to 6.0
2024-10-27 22:21:04 -04:00
patientx
b712d0a718
Merge branch 'comfyanonymous:master' into master
2024-10-27 13:14:51 +03:00
comfyanonymous
9ee0a6553a
float16 inference is a bit broken on mochi.
2024-10-27 04:56:40 -04:00
patientx
0575f60ac4
Merge branch 'comfyanonymous:master' into master
2024-10-26 15:10:27 +03:00
comfyanonymous
5cbb01bc2f
Basic Genmo Mochi video model support.
...
To use:
"Load CLIP" node with t5xxl + type mochi
"Load Diffusion Model" node with the mochi dit file.
"Load VAE" with the mochi vae file.
EmptyMochiLatentVideo node for the latent.
euler + linear_quadratic in the KSampler node.
2024-10-26 06:54:00 -04:00
comfyanonymous
c3ffbae067
Make LatentUpscale nodes work on 3d latents.
2024-10-26 01:50:51 -04:00
comfyanonymous
d605677b33
Make euler_ancestral work on flow models (credit: Ashen).
2024-10-25 19:53:44 -04:00
patientx
6a9115d7ea
Merge branch 'comfyanonymous:master' into master
2024-10-24 11:06:56 +03:00
PsychoLogicAu
af8cf79a2d
Support SimpleTuner lycoris lora for SD3 ( #5340 )
2024-10-24 01:18:32 -04:00
patientx
bb7e9ef812
Merge branch 'comfyanonymous:master' into master
2024-10-24 00:06:13 +03:00
comfyanonymous
66b0961a46
Fix ControlLora issue with last commit.
2024-10-23 17:02:40 -04:00
patientx
ca700c7638
Merge branch 'comfyanonymous:master' into master
2024-10-23 23:58:26 +03:00
comfyanonymous
754597c8a9
Clean up some controlnet code.
...
Remove self.device which was useless.
2024-10-23 14:19:05 -04:00
patientx
fd143ca944
Merge branch 'comfyanonymous:master' into master
2024-10-23 00:19:39 +03:00
comfyanonymous
915fdb5745
Fix lowvram edge case.
2024-10-22 16:34:50 -04:00
contentis
5a8a48931a
Remove attention abstraction ( #5324 )
2024-10-22 14:02:38 -04:00
patientx
f0e8767deb
Merge branch 'comfyanonymous:master' into master
2024-10-22 13:39:15 +03:00
comfyanonymous
8ce2a1052c
Optimizations to --fast and scaled fp8.
2024-10-22 02:12:28 -04:00
comfyanonymous
f82314fcfc
Fix duplicate sigmas on beta scheduler.
2024-10-21 20:19:45 -04:00
comfyanonymous
0075c6d096
Mixed precision diffusion models with scaled fp8.
...
This change adds support for diffusion models where all the linears are
scaled fp8 while the other weights keep the original precision.
2024-10-21 18:12:51 -04:00
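(A rough sketch of the scaled-fp8 idea under stated assumptions: one per-tensor scale, dequantize on use, names hypothetical; requires a PyTorch build with float8 dtypes.)

```python
import torch

weight = torch.randn(64, 64)
scale = weight.abs().max() / torch.finfo(torch.float8_e4m3fn).max
weight_fp8 = (weight / scale).to(torch.float8_e4m3fn)   # stored compactly

def scaled_linear(x: torch.Tensor) -> torch.Tensor:
    # Dequantize on the fly; non-linear weights elsewhere keep full precision.
    return x @ (weight_fp8.to(torch.float32) * scale).T

out = scaled_linear(torch.randn(1, 64))
```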
patientx
9fd46200ab
Merge branch 'comfyanonymous:master' into master
2024-10-21 12:23:49 +03:00
comfyanonymous
83ca891118
Support scaled fp8 t5xxl model.
2024-10-20 22:27:00 -04:00
patientx
5a425aeda1
Merge branch 'comfyanonymous:master' into master
2024-10-20 21:06:24 +03:00
comfyanonymous
f9f9faface
Fixed model merging issue with scaled fp8.
2024-10-20 06:24:31 -04:00
patientx
1678ea8f9c
Merge branch 'comfyanonymous:master' into master
2024-10-20 10:20:06 +03:00
comfyanonymous
471cd3eace
fp8 casting is fast on GPUs that support fp8 compute.
2024-10-20 00:54:47 -04:00
comfyanonymous
a68bbafddb
Support diffusion models with scaled fp8 weights.
2024-10-19 23:47:42 -04:00
comfyanonymous
73e3a9e676
Clamp output when rounding weight to prevent Nan.
2024-10-19 19:07:10 -04:00
patientx
d4b509799f
Merge branch 'comfyanonymous:master' into master
2024-10-18 11:14:20 +03:00
comfyanonymous
67158994a4
Use the lowvram cast_to function for everything.
2024-10-17 17:25:56 -04:00
patientx
fc4acf26c3
Merge branch 'comfyanonymous:master' into master
2024-10-16 23:54:39 +03:00
comfyanonymous
0bedfb26af
Revert "Fix Transformers FutureWarning ( #5140 )"
...
This reverts commit 95b7cf9bbe.
2024-10-16 12:36:19 -04:00
patientx
f143a803d6
Merge branch 'comfyanonymous:master' into master
2024-10-15 09:55:21 +03:00
comfyanonymous
f584758271
Cleanup some useless lines.
2024-10-14 21:02:39 -04:00
svdc
95b7cf9bbe
Fix Transformers FutureWarning ( #5140 )
...
* Update sd1_clip.py
Fix Transformers FutureWarning
* Update sd1_clip.py
Fix comment
2024-10-14 20:12:20 -04:00
patientx
a5e3eae103
Merge branch 'comfyanonymous:master' into master
2024-10-12 23:00:55 +03:00
comfyanonymous
3c60ecd7a8
Fix fp8 ops staying enabled.
2024-10-12 14:10:13 -04:00
comfyanonymous
7ae6626723
Remove useless argument.
2024-10-12 07:16:21 -04:00
patientx
eae1c15ab1
Merge branch 'comfyanonymous:master' into master
2024-10-12 11:02:28 +03:00
comfyanonymous
6632365e16
model_options consistency between functions.
...
weight_dtype -> dtype
2024-10-11 20:51:19 -04:00
Kadir Nar
ad07796777
🐛 Add device to variable c ( #5210 )
2024-10-11 20:37:50 -04:00
patientx
e4d24788f1
Merge branch 'comfyanonymous:master' into master
2024-10-11 00:22:45 +03:00
comfyanonymous
1b80895285
Make clip loader nodes support loading sd3 t5xxl in lower precision.
...
Add attention mask support in the SD3 text encoder code.
2024-10-10 15:06:15 -04:00
patientx
f9eab05f54
Merge branch 'comfyanonymous:master' into master
2024-10-10 10:30:17 +03:00
Dr.Lt.Data
5f9d5a244b
Hotfix for the div zero occurrence when memory_used_encode is 0 ( #5121 )
...
https://github.com/comfyanonymous/ComfyUI/issues/5069#issuecomment-2382656368
2024-10-09 23:34:34 -04:00
Jonathan Avila
4b2f0d9413
Increase maximum macOS version to 15.0.1 when forcing upcast attention ( #5191 )
2024-10-09 22:21:41 -04:00
comfyanonymous
e38c94228b
Add a weight_dtype fp8_e4m3fn_fast to the Diffusion Model Loader node.
...
This is used to load weights in fp8 and use fp8 matrix multiplication.
2024-10-09 19:43:17 -04:00
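For context, the fp8 matrix multiplication this weight_dtype leans on is exposed in PyTorch through the private torch._scaled_mm op. A hedged sketch, assuming a recent 2.x build and fp8-capable hardware; the op's signature has shifted between releases, and both operands must already be float8.

```python
import torch

def fp8_matmul(x_fp8, w_fp8, scale_x, scale_w, out_dtype=torch.bfloat16):
    # x_fp8: [M, K] row-major; w_fp8: [N, K]. _scaled_mm wants the second
    # operand column-major, hence the transpose.
    return torch._scaled_mm(x_fp8, w_fp8.t(),
                            scale_a=scale_x, scale_b=scale_w,
                            out_dtype=out_dtype)

x = torch.randn(16, 64, device="cuda").to(torch.float8_e4m3fn)
w = torch.randn(32, 64, device="cuda").to(torch.float8_e4m3fn)
one = torch.ones((), device="cuda")   # float32 scalar scales on device
y = fp8_matmul(x, w, one, one)        # [16, 32] in bfloat16
```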
patientx
21d9bef158
Merge branch 'comfyanonymous:master' into master
2024-10-09 11:22:05 +03:00
comfyanonymous
203942c8b2
Fix flux doras with diffusers keys.
2024-10-08 19:03:40 -04:00
patientx
df174b7cbd
Merge branch 'comfyanonymous:master' into master
2024-10-07 22:22:38 +03:00
comfyanonymous
8dfa0cc552
Make SD3 fast previews a little better.
2024-10-07 09:19:59 -04:00
patientx
445390eae2
Merge branch 'comfyanonymous:master' into master
2024-10-07 10:15:48 +03:00
comfyanonymous
e5ecdfdd2d
Make fast previews for SDXL a little better by adding a bias.
2024-10-06 19:27:04 -04:00
comfyanonymous
7d29fbf74b
Slightly improve the fast previews for flux by adding a bias.
2024-10-06 17:55:46 -04:00
patientx
dfa888ea5e
Merge branch 'comfyanonymous:master' into master
2024-10-05 21:09:30 +03:00
comfyanonymous
7d2467e830
Some minor cleanups.
2024-10-05 13:22:39 -04:00
patientx
613a320875
Merge branch 'comfyanonymous:master' into master
2024-10-05 00:01:10 +03:00
comfyanonymous
6f021d8aa0
Let --verbose have an argument for the log level.
2024-10-04 10:05:34 -04:00
patientx
9ec6b86747
Merge branch 'comfyanonymous:master' into master
2024-10-03 22:16:09 +03:00
comfyanonymous
d854ed0bcf
Allow using SD3 type te output on flux model.
2024-10-03 09:44:54 -04:00
comfyanonymous
abcd006b8c
Allow more permutations of clip/t5 in dual clip loader.
2024-10-03 09:26:11 -04:00
patientx
4c9c09b888
Merge branch 'comfyanonymous:master' into master
2024-10-02 13:35:22 +03:00
comfyanonymous
d985d1d7dc
CLIP Loader node now supports clip_l and clip_g only for SD3.
2024-10-02 04:25:17 -04:00
patientx
8e0250f3d1
Merge branch 'comfyanonymous:master' into master
2024-10-01 19:40:37 +03:00
comfyanonymous
d1cdf51e1b
Refactor some of the TE detection code.
2024-10-01 07:08:41 -04:00
patientx
305c31142b
Merge branch 'comfyanonymous:master' into master
2024-10-01 11:55:23 +03:00
comfyanonymous
b4626ab93e
Add simpletuner lycoris format for SD unet.
2024-09-30 06:03:27 -04:00
patientx
fa20ab0287
Merge branch 'comfyanonymous:master' into master
2024-09-29 23:53:36 +03:00
comfyanonymous
a9e459c2a4
Use torch.nn.functional.linear in RGB preview code.
...
Add an optional bias to the latent RGB preview code.
2024-09-29 11:27:49 -04:00
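The preview change above reduces to a single linear map from latent channels to RGB plus an optional bias. A minimal sketch, assuming a per-model factor matrix of shape [C_latent, 3]; the shapes and the display normalization are assumptions.

```python
import torch

def latent_to_rgb_preview(latent, rgb_factors, bias=None):
    # latent: [B, C, H, W]; rgb_factors: [C, 3] -> preview: [B, H, W, 3]
    img = torch.nn.functional.linear(latent.movedim(1, -1), rgb_factors.t(), bias)
    return ((img + 1.0) / 2.0).clamp(0, 1)   # assumed [-1, 1] -> [0, 1] mapping
```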
patientx
9a0f99ba71
Merge branch 'comfyanonymous:master' into master
2024-09-28 22:28:25 +03:00
comfyanonymous
3bb4dec720
Fix issue with loras, lowvram and --fast fp8.
2024-09-28 14:42:32 -04:00
City
8733191563
Flux torch.compile fix ( #5082 )
2024-09-27 22:07:51 -04:00
patientx
d82af09346
Merge branch 'comfyanonymous:master' into master
2024-09-25 09:59:21 +03:00
comfyanonymous
bdd4a22a2e
Fix flux TE not loading t5 embeddings.
2024-09-24 22:57:22 -04:00
patientx
513f7cfda0
Merge branch 'comfyanonymous:master' into master
2024-09-24 10:58:58 +03:00
chaObserv
479a427a48
Add dpmpp_2m_cfg_pp ( #4992 )
2024-09-24 02:42:56 -04:00
patientx
ba87171925
Merge branch 'comfyanonymous:master' into master
2024-09-23 13:49:57 +03:00
comfyanonymous
3a0eeee320
Make --listen listen on both ipv4 and ipv6 at the same time by default.
2024-09-23 04:38:19 -04:00
patientx
6a1e215ad2
Merge branch 'comfyanonymous:master' into master
2024-09-23 10:07:44 +03:00
comfyanonymous
9c41bc8d10
Remove useless line.
2024-09-23 02:32:29 -04:00
patientx
63c21e0bfa
Merge branch 'comfyanonymous:master' into master
2024-09-22 13:35:44 +03:00
comfyanonymous
7a415f47a9
Add an optional VAE input to the ControlNetApplyAdvanced node.
...
Deprecate the other controlnet nodes.
2024-09-22 01:24:52 -04:00
patientx
ae611c9b61
Merge branch 'comfyanonymous:master' into master
2024-09-21 13:51:37 +03:00
comfyanonymous
dc96a1ae19
Load controlnet in fp8 if weights are in fp8.
2024-09-21 04:50:12 -04:00
comfyanonymous
2d810b081e
Add load_controlnet_state_dict function.
2024-09-21 01:51:51 -04:00
comfyanonymous
9f7e9f0547
Add an error message when a controlnet needs a VAE but none is given.
2024-09-21 01:33:18 -04:00
patientx
d3c8252c48
Merge branch 'comfyanonymous:master' into master
2024-09-20 10:14:16 +03:00
comfyanonymous
70a708d726
Fix model merging issue.
2024-09-20 02:31:44 -04:00
yoinked
e7d4782736
add laplace scheduler [2407.03297] ( #4990 )
...
* add laplace scheduler [2407.03297]
* should be here instead lol
* better settings
2024-09-19 23:23:09 -04:00
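A hedged sketch of how a Laplace-shaped schedule can be built: space the steps by the Laplace quantile function Q(p) = mu - b * sgn(p - 1/2) * ln(1 - 2|p - 1/2|) and map them into the model's sigma range. The mu/beta defaults and the log-space mapping are assumptions, not necessarily the merged implementation.

```python
import math
import torch

def laplace_sigmas(steps, sigma_min, sigma_max, mu=0.0, beta=0.5):
    p = torch.linspace(1e-4, 1.0 - 1e-4, steps)          # avoid log(0) at the ends
    q = mu - beta * torch.sign(p - 0.5) * torch.log(1.0 - 2.0 * (p - 0.5).abs())
    q = (q - q.min()) / (q.max() - q.min())              # normalize to [0, 1]
    log_min, log_max = math.log(sigma_min), math.log(sigma_max)
    sigmas = torch.exp(log_min + q * (log_max - log_min))
    return torch.sort(sigmas, descending=True).values    # high -> low noise
```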
patientx
9e8686df8d
Merge branch 'comfyanonymous:master' into master
2024-09-19 19:57:21 +03:00
comfyanonymous
ad66f7c7d8
Add model_options to load_controlnet function.
2024-09-19 08:23:35 -04:00
Simon Lui
de8e8e3b0d
Fix xpu Pytorch nightly build from calling optimize which doesn't exist. ( #4978 )
2024-09-19 05:11:42 -04:00
patientx
4c62b6d8f0
Merge branch 'comfyanonymous:master' into master
2024-09-17 11:02:10 +03:00
pharmapsychotic
0b7dfa986d
Improve tiling calculations to reduce number of tiles that need to be processed. ( #4944 )
2024-09-17 03:51:10 -04:00
comfyanonymous
d514bb38ee
Add some option to model_options for the text encoder.
...
load_device, offload_device and the initial_device can now be set.
2024-09-17 03:49:54 -04:00
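In other words, the text encoder's placement can now be steered through model_options. A sketch of the dict using the keys named above; the values and how the dict is threaded into the loader are assumptions.

```python
import torch

model_options = {
    "load_device": torch.device("cuda:0"),    # device used for inference
    "offload_device": torch.device("cpu"),    # where weights rest when idle
    "initial_device": torch.device("cpu"),    # where weights first materialize
}
```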
comfyanonymous
0849c80e2a
get_key_patches now works without unloading the model.
2024-09-17 01:57:59 -04:00
patientx
8bcf408c79
Merge branch 'comfyanonymous:master' into master
2024-09-15 16:41:23 +03:00
comfyanonymous
e813abbb2c
Long CLIP L support for SDXL, SD3 and Flux.
...
Use the *CLIPLoader nodes.
2024-09-15 07:59:38 -04:00
patientx
b9a24c0146
Merge branch 'comfyanonymous:master' into master
2024-09-14 16:14:57 +03:00
comfyanonymous
f48e390032
Support AliMama SD3 and Flux inpaint controlnets.
...
Use the ControlNetInpaintingAliMamaApply node.
2024-09-14 09:05:16 -04:00
patientx
2710ea1aa2
Merge branch 'comfyanonymous:master' into master
2024-09-13 23:07:04 +03:00
comfyanonymous
cf80d28689
Support loading controlnets with different input.
2024-09-13 09:54:37 -04:00
patientx
8ca077b268
Merge branch 'comfyanonymous:master' into master
2024-09-12 16:48:54 +03:00
Robin Huang
b962db9952
Add cli arg to override user directory ( #4856 )
...
* Override user directory.
* Use overridden user directory.
* Remove prints.
* Remove references to global user_files.
* Remove unused replace_folder function.
* Remove newline.
* Remove global during get_user_directory.
* Add validation.
2024-09-12 08:10:27 -04:00
patientx
73c31987c6
Merge branch 'comfyanonymous:master' into master
2024-09-12 12:44:27 +03:00
comfyanonymous
9d720187f1
types -> comfy_types to fix import issue.
2024-09-12 03:57:46 -04:00
patientx
8eb7ca051a
Merge branch 'comfyanonymous:master' into master
2024-09-11 10:26:46 +03:00
comfyanonymous
9f4daca9d9
Doesn't really make sense for cfg_pp sampler to call regular one.
2024-09-11 02:51:36 -04:00
yoinked
b5d0f2a908
Add CFG++ to DPM++ 2S Ancestral ( #3871 )
...
* Update sampling.py
* Update samplers.py
* my bad
* "fix" the sampler
* Update samplers.py
* i named it wrong
* minor sampling improvements
mainly using a dynamic rho value (hey this sounds a lot like smea!!!)
* revert rho change
rho? r? its just 1/2
2024-09-11 02:49:44 -04:00
patientx
c4e18b7206
Merge branch 'comfyanonymous:master' into master
2024-09-08 22:20:50 +03:00
comfyanonymous
9c5fca75f4
Fix lora issue.
2024-09-08 10:10:47 -04:00
comfyanonymous
32a60a7bac
Support onetrainer text encoder Flux lora.
2024-09-08 09:31:41 -04:00
patientx
52f858d715
Merge branch 'comfyanonymous:master' into master
2024-09-07 14:47:35 +03:00
Jim Winkens
bb52934ba4
Fix import issue ( #4815 )
2024-09-07 05:28:32 -04:00
patientx
962638c9dc
Merge branch 'comfyanonymous:master' into master
2024-09-07 11:04:57 +03:00
comfyanonymous
ea77750759
Support a generic Comfy format for text encoder loras.
...
This is a format with keys like:
text_encoders.clip_l.transformer.text_model.encoder.layers.9.self_attn.v_proj.lora_up.weight
Instead of waiting for me to add support for specific lora formats, you can
convert your text encoder loras to this format instead.
If you want to see an example, save a text encoder lora with the SaveLora
node using the commit right after this one.
2024-09-07 02:20:39 -04:00
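Following that description, converting an existing text encoder lora is mostly a key-renaming exercise. A hypothetical converter, assuming the source dict already uses transformer-style module paths with lora_up/lora_down weights and needs no further remapping:

```python
def to_comfy_te_lora(state_dict, te_name="clip_l"):
    # Prefix every key so it lands under text_encoders.<te_name>. as in the
    # example key above; te_name and the no-remapping assumption are illustrative.
    prefix = f"text_encoders.{te_name}."
    return {prefix + k: v for k, v in state_dict.items()}
```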
patientx
bc054d012b
Merge branch 'comfyanonymous:master' into master
2024-09-06 10:58:13 +03:00
comfyanonymous
c27ebeb1c2
Fix onnx export not working on flux.
2024-09-06 03:21:52 -04:00
patientx
6fdbaf1a76
Merge branch 'comfyanonymous:master' into master
2024-09-05 12:04:05 +03:00
comfyanonymous
5cbaa9e07c
Mistoline flux controlnet support.
2024-09-05 00:05:17 -04:00
comfyanonymous
c7427375ee
Prioritize freeing partially offloaded models first.
2024-09-04 19:47:32 -04:00
patientx
894c727ce2
Update model_management.py
2024-09-05 00:05:54 +03:00
patientx
b518390241
Merge branch 'comfyanonymous:master' into master
2024-09-04 22:36:12 +03:00
Jedrzej Kosinski
f04229b84d
Add emb_patch support to UNetModel forward ( #4779 )
2024-09-04 14:35:15 -04:00
patientx
64f428801e
Merge branch 'comfyanonymous:master' into master
2024-09-04 09:29:56 +03:00
Silver
f067ad15d1
Make live preview size a configurable launch argument ( #4649 )
...
* Make live preview size a configurable launch argument
* Remove import from testing phase
* Update cli_args.py
2024-09-03 19:16:38 -04:00
comfyanonymous
483004dd1d
Support newer glora format.
2024-09-03 17:02:19 -04:00
patientx
88ccc8f3a5
Merge branch 'comfyanonymous:master' into master
2024-09-03 11:01:28 +03:00
comfyanonymous
00a5d08103
Lower fp8 lora memory usage.
2024-09-03 01:25:05 -04:00
patientx
f2122a355b
Merge branch 'comfyanonymous:master' into master
2024-09-02 16:06:23 +03:00
comfyanonymous
d043997d30
Flux onetrainer lora.
2024-09-02 08:22:15 -04:00
patientx
93fa5c9ebb
Merge branch 'comfyanonymous:master' into master
2024-09-02 10:03:48 +03:00
comfyanonymous
8d31a6632f
Speed up inference on nvidia 10 series on Linux.
2024-09-01 17:29:31 -04:00
patientx
f02c0d3ed9
Merge branch 'comfyanonymous:master' into master
2024-09-01 14:34:56 +03:00
comfyanonymous
b643eae08b
Make minimum_inference_memory() depend on --reserve-vram
2024-09-01 01:18:34 -04:00
patientx
acc3d6a2ea
Update model_management.py
2024-08-30 20:13:28 +03:00
patientx
51af2440ef
Update model_management.py
2024-08-30 20:10:47 +03:00
patientx
3e226f02f3
Update model_management.py
2024-08-30 20:08:18 +03:00
comfyanonymous
935ae153e1
Cleanup.
2024-08-30 12:53:59 -04:00
patientx
aeab6d1370
Merge branch 'comfyanonymous:master' into master
2024-08-30 19:49:03 +03:00
Chenlei Hu
e91662e784
Get logs endpoint & system_stats additions ( #4690 )
...
* Add route for getting output logs
* Include ComfyUI version
* Move to own function
* Changed to memory logger
* Unify logger setup logic
* Fix get version git fallback
---------
Co-authored-by: pythongosssss <125205205+pythongosssss@users.noreply.github.com>
2024-08-30 12:46:37 -04:00
patientx
d8c04b9022
Merge branch 'comfyanonymous:master' into master
2024-08-30 19:42:36 +03:00
patientx
524cd140b5
removed bfloat from flux model support, resulting in 2x speedup
2024-08-30 13:33:32 +03:00
patientx
a8652a052f
Merge branch 'comfyanonymous:master' into master
2024-08-30 12:14:01 +03:00
comfyanonymous
63fafaef45
Fix potential issue with hydit controlnets.
2024-08-30 04:58:41 -04:00
comfyanonymous
6eb5d64522
Fix glora lowvram issue.
2024-08-29 19:07:23 -04:00
comfyanonymous
10a79e9898
Implement model part of flux union controlnet.
2024-08-29 18:41:22 -04:00
patientx
a110c83af7
Merge branch 'comfyanonymous:master' into master
2024-08-29 20:51:26 +03:00
patientx
02c34de8b1
Merge branch 'comfyanonymous:master' into master
2024-08-29 11:55:26 +03:00
comfyanonymous
ea3f39bd69
InstantX depth flux controlnet.
2024-08-29 02:14:19 -04:00
comfyanonymous
b33cd61070
InstantX canny controlnet.
2024-08-28 19:02:50 -04:00
patientx
39c3ef9d66
Merge branch 'comfyanonymous:master' into master
2024-08-29 01:34:07 +03:00
comfyanonymous
d31e226650
Unify RMSNorm code.
2024-08-28 16:56:38 -04:00
patientx
7056a6aa6f
Merge branch 'comfyanonymous:master' into master
2024-08-28 09:36:30 +03:00
comfyanonymous
38c22e631a
Fix case where model was not properly unloaded in merging workflows.
2024-08-27 19:03:51 -04:00
patientx
bdd77b243b
Merge branch 'comfyanonymous:master' into master
2024-08-27 21:29:27 +03:00
Chenlei Hu
6bbdcd28ae
Support weight padding on diff weight patch ( #4576 )
2024-08-27 13:55:37 -04:00
patientx
2feaa21954
Merge branch 'comfyanonymous:master' into master
2024-08-27 20:24:31 +03:00
comfyanonymous
ab130001a8
Do RMSNorm in native type.
2024-08-27 02:41:56 -04:00
patientx
4193c15afe
Merge branch 'comfyanonymous:master' into master
2024-08-26 22:56:02 +03:00
comfyanonymous
2ca8f6e23d
Make the stochastic fp8 rounding reproducible.
2024-08-26 15:12:06 -04:00
comfyanonymous
7985ff88b9
Use less memory in float8 lora patching by doing calculations in fp16.
2024-08-26 14:45:58 -04:00
patientx
58594a0b47
Merge branch 'comfyanonymous:master' into master
2024-08-26 14:29:57 +03:00
comfyanonymous
c6812947e9
Fix potential memory leak.
2024-08-26 02:07:32 -04:00
patientx
902d97af7d
Merge branch 'comfyanonymous:master' into master
2024-08-25 23:35:11 +03:00
comfyanonymous
9230f65823
Fix some controlnets OOMing when loading.
2024-08-25 05:54:29 -04:00
patientx
c60a87396b
Merge branch 'comfyanonymous:master' into master
2024-08-24 11:31:17 +03:00
comfyanonymous
8ae23d8e80
Fix onnx export.
2024-08-23 17:52:47 -04:00
patientx
134569ea48
Update model_management.py
2024-08-23 14:10:09 +03:00
patientx
c98e8a0a55
Merge branch 'comfyanonymous:master' into master
2024-08-23 12:31:51 +03:00
comfyanonymous
7df42b9a23
Fix dora.
2024-08-23 04:58:59 -04:00
comfyanonymous
5d8bbb7281
Cleanup.
2024-08-23 04:06:27 -04:00
patientx
9f87d61bfe
Merge branch 'comfyanonymous:master' into master
2024-08-23 11:04:56 +03:00
comfyanonymous
2c1d2375d6
Fix.
2024-08-23 04:04:55 -04:00
Simon Lui
64ccb3c7e3
Rework IPEX check for future inclusion of XPU into Pytorch upstream and do a bit more optimization of ipex.optimize(). ( #4562 )
2024-08-23 03:59:57 -04:00
Scorpinaus
9465b23432
Added SD15_Inpaint_Diffusers model support for unet_config_from_diffusers_unet function ( #4565 )
2024-08-23 03:57:08 -04:00
patientx
1ef90b7ac8
Merge branch 'comfyanonymous:master' into master
2024-08-23 00:55:19 +03:00
comfyanonymous
c0b0da264b
Missing imports.
2024-08-22 17:20:51 -04:00
comfyanonymous
c26ca27207
Move calculate function to comfy.lora
2024-08-22 17:12:00 -04:00
comfyanonymous
7c6bb84016
Code cleanups.
2024-08-22 17:05:12 -04:00
patientx
dec75f11e4
Merge branch 'comfyanonymous:master' into master
2024-08-22 23:36:58 +03:00
comfyanonymous
c54d3ed5e6
Fix issue with models staying loaded in memory.
2024-08-22 15:58:20 -04:00
comfyanonymous
c7ee4b37a1
Try to fix some lora issues.
2024-08-22 15:32:18 -04:00
David
7b70b266d8
Generalize MacOS version check for force-upcast-attention ( #4548 )
...
This code automatically forces upcasting attention for MacOS versions 14.5 and 14.6. My computer returns the string "14.6.1" for `platform.mac_ver()[0]`, so this generalizes the comparison to catch more versions.
I am running MacOS Sonoma 14.6.1 (latest version) and was seeing black image generation on previously functional workflows after recent software updates. This PR solved the issue for me.
See comfyanonymous/ComfyUI#3521
2024-08-22 13:24:21 -04:00
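The gist of the generalization: compare parsed version tuples instead of exact strings, so a patch release like "14.6.1" still falls inside the affected range. A minimal sketch; the 14.5-14.6 bounds reflect this commit, and later commits in this log raise them.

```python
import platform

def needs_attention_upcast():
    ver = platform.mac_ver()[0]                 # "" on non-macOS
    if not ver:
        return False
    parts = tuple(int(p) for p in ver.split("."))
    return (14, 5) <= parts[:2] <= (14, 6)      # matches 14.5.x and 14.6.x
```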
comfyanonymous
8f60d093ba
Fix issue.
2024-08-22 10:38:24 -04:00
patientx
0cd8a740bb
Merge branch 'comfyanonymous:master' into master
2024-08-22 14:01:42 +03:00
comfyanonymous
843a7ff70c
fp16 is actually faster than fp32 on a GTX 1080.
2024-08-21 23:23:50 -04:00
patientx
febf8601dc
Merge branch 'comfyanonymous:master' into master
2024-08-22 00:07:14 +03:00
comfyanonymous
a60620dcea
Fix slow performance on 10 series Nvidia GPUs.
2024-08-21 16:39:02 -04:00
comfyanonymous
015f73dc49
Try a different type of flux fp16 fix.
2024-08-21 16:17:15 -04:00
comfyanonymous
904bf58e7d
Make --fast work on pytorch nightly.
2024-08-21 14:01:41 -04:00
patientx
0774774bb9
Merge branch 'comfyanonymous:master' into master
2024-08-21 19:19:41 +03:00
Svein Ove Aas
5f50263088
Replace use of .view with .reshape ( #4522 )
...
When generating images with fp8_e4m3 Flux and batch size >1, using --fast, ComfyUI throws a "view size is not compatible with input tensor's size and stride" error pointing at the first of these two calls to view.
As reshape is semantically equivalent to view except for working on a broader set of inputs, there should be no downside to changing this. The only difference is that it clones the underlying data in cases where .view would error out. I have confirmed that the output still looks as expected, but cannot confirm that no mutable use is made of the tensors anywhere.
Note that --fast is only marginally faster than the default.
2024-08-21 11:21:48 -04:00
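The distinction the PR relies on, in two lines: view demands a stride-compatible layout, while reshape silently copies when it must.

```python
import torch

t = torch.arange(6).reshape(2, 3).t()   # transpose makes t non-contiguous
flat = t.reshape(6)                      # fine: reshape copies here
# flat = t.view(6)                       # raises "view size is not compatible
#                                        #  with input tensor's size and stride"
```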
patientx
ac75d4e4e0
Merge branch 'comfyanonymous:master' into master
2024-08-21 09:49:29 +03:00
comfyanonymous
03ec517afb
Remove useless line, adjust windows default reserved vram.
2024-08-21 00:47:19 -04:00
comfyanonymous
510f3438c1
Speed up fp8 matrix mult by using better code.
2024-08-20 22:53:26 -04:00
patientx
5656b5b956
Merge branch 'comfyanonymous:master' into master
2024-08-20 23:07:54 +03:00
comfyanonymous
ea63b1c092
Simpletrainer lycoris format.
2024-08-20 12:05:13 -04:00
comfyanonymous
9953f22fce
Add --fast argument to enable experimental optimizations.
...
Optimizations that might break things/lower quality will be put behind
this flag first and might be enabled by default in the future.
Currently the only optimization is float8_e4m3fn matrix multiplication on
4000/ADA series Nvidia cards or later. If you have one of these cards, you
will see a speed boost when using fp8_e4m3fn flux, for example.
2024-08-20 11:55:51 -04:00
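A hedged sketch of the hardware gate described above: fp8 tensor-core matmul needs Ada (SM 8.9) or newer, so an eligibility check might look like this. The exact condition ComfyUI uses may differ.

```python
import torch

def fp8_fast_eligible():
    # float8_e4m3fn matmul needs Ada (SM 8.9) / Hopper (SM 9.0) or later
    if not torch.cuda.is_available():
        return False
    return torch.cuda.get_device_capability() >= (8, 9)
```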
comfyanonymous
d1a6bd6845
Support loading long clipl model with the CLIP loader node.
2024-08-20 10:46:36 -04:00
comfyanonymous
83dbac28eb
Properly set whether the clip text model has a pooled projection instead of using a hack.
2024-08-20 10:46:36 -04:00
comfyanonymous
538cb068bc
Make cast_to a nop if weight is already good.
2024-08-20 10:46:36 -04:00
comfyanonymous
1b3eee672c
Fix potential issue with multi devices.
2024-08-20 10:46:36 -04:00
patientx
9727da93ea
Merge branch 'comfyanonymous:master' into master
2024-08-20 12:35:06 +03:00
comfyanonymous
9eee470244
New load_text_encoder_state_dicts function.
...
Now you can load text encoders straight from a list of state dicts.
2024-08-19 17:36:35 -04:00
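A hypothetical call shape, going by the commit message alone; the module path and any extra keyword arguments are assumptions.

```python
import comfy.sd  # assumes a ComfyUI checkout on PYTHONPATH
from safetensors.torch import load_file

state_dicts = [load_file("clip_l.safetensors"), load_file("t5xxl_fp16.safetensors")]
clip = comfy.sd.load_text_encoder_state_dicts(state_dicts)
```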
patientx
b20f5b1e32
Merge branch 'comfyanonymous:master' into master
2024-08-20 00:31:41 +03:00
comfyanonymous
045377ea89
Add a --reserve-vram argument if you don't want comfy to use all of it.
...
--reserve-vram 1.0 for example will make ComfyUI try to keep 1GB vram free.
This can also be useful if workflows are failing because of OOM errors, but in
that case please report whether --reserve-vram improves your situation.
2024-08-19 17:16:18 -04:00
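Combined with the minimum_inference_memory() change earlier in this log, the flag's effect is roughly additive. A sketch under stated assumptions: the 1 GB baseline is invented for illustration and is not ComfyUI's actual figure.

```python
GIB = 1024 * 1024 * 1024

def minimum_inference_memory(reserve_vram_gb=0.0, baseline_bytes=1 * GIB):
    # baseline_bytes stands in for whatever floor ComfyUI already reserves;
    # --reserve-vram simply stacks on top of it
    return baseline_bytes + int(reserve_vram_gb * GIB)

print(minimum_inference_memory(1.0))   # keep ~1 GB extra free
```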
comfyanonymous
4d341b78e8
Bug fixes.
2024-08-19 16:28:55 -04:00
patientx
9baf36e97b
Merge branch 'comfyanonymous:master' into master
2024-08-19 22:54:45 +03:00
comfyanonymous
6138f92084
Use better dtype for the lowvram lora system.
2024-08-19 15:35:25 -04:00
comfyanonymous
be0726c1ed
Remove duplication.
2024-08-19 15:26:50 -04:00
comfyanonymous
4506ddc86a
Better subnormal fp8 stochastic rounding. Thanks Ashen.
2024-08-19 13:38:03 -04:00
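For context on the rounding commits here: stochastic rounding picks the lower or upper representable value with probability proportional to proximity, so quantization is unbiased in expectation. A concept sketch on a uniform grid; real float8 grids are non-uniform, and subnormals need the special handling these commits add.

```python
import torch

def stochastic_round(t, step):
    down = torch.floor(t / step)                   # nearest grid point below
    frac = t / step - down                         # distance to it, in steps
    up = (torch.rand_like(t) < frac).to(t.dtype)   # go up with prob = frac
    return (down + up) * step

x = torch.randn(4)
print(stochastic_round(x, 0.25))
```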
comfyanonymous
20ace7c853
Code cleanup.
2024-08-19 12:48:59 -04:00
patientx
74c8545fa6
Merge branch 'comfyanonymous:master' into master
2024-08-19 17:20:41 +03:00
comfyanonymous
22ec02afc0
Handle subnormal numbers in float8 rounding.
2024-08-19 05:51:08 -04:00
patientx
eb8d7f86d1
Merge branch 'comfyanonymous:master' into master
2024-08-19 10:02:52 +03:00
comfyanonymous
39f114c44b
Less broken non-blocking?
2024-08-18 16:53:17 -04:00