patientx
03825eaa28
Merge branch 'comfyanonymous:master' into master
2025-03-06 22:23:13 +03:00
comfyanonymous
dfa36e6855
Fix some things breaking when embeddings fail to apply.
2025-03-06 13:31:55 -05:00
patientx
44c060b3de
Merge branch 'comfyanonymous:master' into master
2025-03-06 12:16:31 +03:00
comfyanonymous
29a70ca101
Support HunyuanVideo image to video model.
2025-03-06 03:07:15 -05:00
comfyanonymous
0bef826a98
Support llava clip vision model.
2025-03-06 00:24:43 -05:00
comfyanonymous
85ef295069
Make applying embeddings more efficient.
...
Adding new tokens no longer makes a whole copy of the embeddings weight
which can be massive on certain models.
2025-03-05 17:34:38 -05:00
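To illustrate the optimization above: instead of concatenating rows for newly added tokens onto the full embedding matrix (which copies the whole tensor, potentially gigabytes on large text encoders), lookups for new tokens can be served from a small side table. A minimal sketch in Python; the helper name and signature are hypothetical, not ComfyUI's actual code:

    import torch

    def embed_with_extra_tokens(weight, token_ids, extra_rows):
        # weight: the model's (vocab, dim) embedding matrix, left untouched.
        # extra_rows: small (n_new, dim) table for newly added tokens whose
        # ids start at weight.shape[0]. No full copy of weight is made.
        vocab = weight.shape[0]
        out = torch.empty(len(token_ids), weight.shape[1], dtype=weight.dtype)
        for i, tok in enumerate(token_ids):
            out[i] = weight[tok] if tok < vocab else extra_rows[tok - vocab]
        return out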
patientx
199e91029d
Merge branch 'comfyanonymous:master' into master
2025-03-05 23:54:19 +03:00
Chenlei Hu
5d84607bf3
Add type hint for FileLocator ( #6968 )
...
* Add type hint for FileLocator
* nit
2025-03-05 15:35:26 -05:00
Silver
c1909f350f
Better argument handling of front-end-root ( #7043 )
...
* Better argument handling of front-end-root
Improves handling of the front-end-root launch argument. There have been several instances where users set it, yet ComfyUI launched as normal and completely disregarded it, which doesn't make sense. It is better to indicate to the user that something is incorrect.
* Removed unused import
There was no real reason to use "Optional" typing in the front-end-root argument.
2025-03-05 15:34:22 -05:00
Chenlei Hu
52b3469606
[NodeDef] Explicitly add control_after_generate to seed/noise_seed ( #7059 )
...
* [NodeDef] Explicitly add control_after_generate to seed/noise_seed
* Update comfy/comfy_types/node_typing.py
Co-authored-by: filtered <176114999+webfiltered@users.noreply.github.com>
---------
Co-authored-by: filtered <176114999+webfiltered@users.noreply.github.com>
2025-03-05 15:33:23 -05:00
patientx
c36a942c12
Merge branch 'comfyanonymous:master' into master
2025-03-05 19:11:25 +03:00
comfyanonymous
369b079ff6
Fix lowvram issue with ltxv vae.
2025-03-05 05:26:08 -05:00
comfyanonymous
9c9a7f012a
Adjust ltxv memory factor.
2025-03-05 05:16:05 -05:00
patientx
93001919fa
Merge branch 'comfyanonymous:master' into master
2025-03-05 11:17:58 +03:00
comfyanonymous
93fedd92fe
Support LTXV 0.9.5.
...
Credits: Lightricks team.
2025-03-05 00:13:49 -05:00
patientx
586eabb95d
Merge branch 'comfyanonymous:master' into master
2025-03-04 19:10:54 +03:00
comfyanonymous
65042f7d39
Make it easier to set a custom template for hunyuan video.
2025-03-04 09:26:05 -05:00
patientx
cacb8da101
Merge branch 'comfyanonymous:master' into master
2025-03-04 13:17:37 +03:00
comfyanonymous
7c7c70c400
Refactor skyreels i2v code.
2025-03-04 00:15:45 -05:00
patientx
f12bcde392
Merge branch 'comfyanonymous:master' into master
2025-03-03 15:10:56 +03:00
comfyanonymous
f86c724ef2
Temporal area composition.
...
New ConditioningSetAreaPercentageVideo node.
2025-03-03 06:50:31 -05:00
patientx
6c34d0a58a
Update zluda.py
2025-03-02 22:43:11 +03:00
patientx
7763bf823e
Merge branch 'comfyanonymous:master' into master
2025-03-02 17:18:12 +03:00
comfyanonymous
9af6320ec9
Make 2d area composition nodes work on video models.
2025-03-02 08:19:16 -05:00
patientx
0c4bebf5fb
Merge branch 'comfyanonymous:master' into master
2025-03-01 14:59:20 +03:00
comfyanonymous
4dc6709307
Rename argument in last commit and document the options.
2025-03-01 02:43:49 -05:00
Chenlei Hu
4d55f16ae8
Use enum list for --fast options ( #7024 )
2025-03-01 02:37:35 -05:00
patientx
c235a51d82
Update zluda.py
2025-02-28 16:41:56 +03:00
patientx
af43425ab5
Update model_management.py
2025-02-28 16:37:55 +03:00
patientx
1871a594ba
Merge branch 'comfyanonymous:master' into master
2025-02-28 11:47:19 +03:00
comfyanonymous
cf0b549d48
--fast now takes a number as an argument to indicate how fast you want it.
...
The idea is that you can indicate how much quality vs speed you want.
At the moment:
--fast 2 enables fp16 accumulation if your pytorch supports it.
--fast 5 enables fp8 matrix mult on fp8 models and the optimization above.
--fast without a number enables all optimizations.
2025-02-28 02:48:20 -05:00
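A sketch of the level gating described above; the function and option names are illustrative, not ComfyUI's actual internals:

    def fast_optimizations(level):
        # --fast with no number means "enable everything".
        if level is None:
            level = float("inf")
        enabled = set()
        if level >= 2:
            enabled.add("fp16_accumulation")  # --fast 2 and above
        if level >= 5:
            enabled.add("fp8_matrix_mult")    # --fast 5 and above, fp8 models
        return enabled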
comfyanonymous
eb4543474b
Use fp16 as the intermediate dtype for fp8 weights with --fast if supported.
2025-02-28 02:17:50 -05:00
comfyanonymous
1804397952
Use fp16 if checkpoint weights are fp16 and the model supports it.
2025-02-27 16:39:57 -05:00
patientx
cc65ca4c42
Merge branch 'comfyanonymous:master' into master
2025-02-28 00:39:40 +03:00
comfyanonymous
f4dac8ab6f
Wan code small cleanup.
2025-02-27 07:22:42 -05:00
patientx
c4fb9f2a63
Merge branch 'comfyanonymous:master' into master
2025-02-27 13:06:17 +03:00
BiologicalExplosion
89253e9fe5
Support Cambricon MLU ( #6964 )
...
Co-authored-by: huzhan <huzhan@cambricon.com>
2025-02-26 20:45:13 -05:00
comfyanonymous
3ea3bc8546
Fix wan issues when prompt length is long.
2025-02-26 20:34:02 -05:00
patientx
b04a1f4127
Merge branch 'comfyanonymous:master' into master
2025-02-27 01:24:29 +03:00
comfyanonymous
0270a0b41c
Reduce artifacts on Wan by doing the patch embedding in fp32.
2025-02-26 16:59:26 -05:00
patientx
4f968c3c56
Merge branch 'comfyanonymous:master' into master
2025-02-26 17:11:50 +03:00
comfyanonymous
c37f15f98e
Add fast preview support for Wan models.
2025-02-26 08:56:23 -05:00
patientx
1193f3fbb1
Merge branch 'comfyanonymous:master' into master
2025-02-26 16:47:39 +03:00
comfyanonymous
4bca7367f3
Don't try to use clip_fea on t2v model.
2025-02-26 08:38:09 -05:00
patientx
debf69185c
Merge branch 'comfyanonymous:master' into master
2025-02-26 16:00:33 +03:00
comfyanonymous
b6fefe686b
Better wan memory estimation.
2025-02-26 07:51:22 -05:00
patientx
583f140eda
Merge branch 'comfyanonymous:master' into master
2025-02-26 13:26:25 +03:00
comfyanonymous
fa62287f1f
More code reuse in wan.
...
Fix bug when changing the compute dtype on wan.
2025-02-26 05:22:29 -05:00
patientx
743996a1f7
Merge branch 'comfyanonymous:master' into master
2025-02-26 12:56:06 +03:00
comfyanonymous
0844998db3
Slightly better wan i2v mask implementation.
2025-02-26 03:49:50 -05:00
comfyanonymous
4ced06b879
WIP support for Wan I2V model.
2025-02-26 01:49:43 -05:00
patientx
1e91ff59a1
Merge branch 'comfyanonymous:master' into master
2025-02-26 09:24:15 +03:00
comfyanonymous
cb06e9669b
Wan seems to work with fp16.
2025-02-25 21:37:12 -05:00
patientx
6e894524e2
Merge branch 'comfyanonymous:master' into master
2025-02-26 04:14:10 +03:00
comfyanonymous
9a66bb972d
Make wan work with all latent resolutions.
...
Cleanup some code.
2025-02-25 19:56:04 -05:00
patientx
4269943ac3
Merge branch 'comfyanonymous:master' into master
2025-02-26 03:13:47 +03:00
comfyanonymous
ea0f939df3
Fix issue with wan and other attention implementations.
2025-02-25 19:13:39 -05:00
comfyanonymous
f37551c1d2
Change wan rope implementation to the flux one.
...
Should be more compatible.
2025-02-25 19:11:14 -05:00
patientx
879db7bdfc
Merge branch 'comfyanonymous:master' into master
2025-02-26 02:07:25 +03:00
comfyanonymous
63023011b9
WIP support for Wan t2v model.
2025-02-25 17:20:35 -05:00
patientx
6cf0fdcc3c
Merge branch 'comfyanonymous:master' into master
2025-02-25 17:12:14 +03:00
comfyanonymous
f40076096e
Cleanup some lumina te code.
2025-02-25 04:10:26 -05:00
patientx
d705fe2e0b
Merge branch 'comfyanonymous:master' into master
2025-02-24 13:42:27 +03:00
comfyanonymous
96d891cb94
Speedup on some models by not upcasting bfloat16 to float32 on mac.
2025-02-24 05:41:32 -05:00
patientx
8142770e5f
Merge branch 'comfyanonymous:master' into master
2025-02-23 14:51:43 +03:00
comfyanonymous
ace899e71a
Prioritize fp16 compute when using allow_fp16_accumulation
2025-02-23 04:45:54 -05:00
patientx
c15fe75f7b
Fix "CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling cublasLtMatmulAlgoGetHeuristic" error
2025-02-22 15:44:20 +03:00
patientx
26eb98b96f
Merge branch 'comfyanonymous:master' into master
2025-02-22 14:42:22 +03:00
comfyanonymous
aff16532d4
Remove some useless code.
2025-02-22 04:45:14 -05:00
comfyanonymous
072db3bea6
Assume the mac black image bug won't be fixed before v16.
2025-02-21 20:24:07 -05:00
comfyanonymous
a6deca6d9a
Latest mac still has the black image bug.
2025-02-21 20:14:30 -05:00
patientx
059397437b
Merge branch 'comfyanonymous:master' into master
2025-02-21 23:25:43 +03:00
comfyanonymous
41c30e92e7
Let all model memory be offloaded on nvidia.
2025-02-21 06:32:21 -05:00
patientx
603cacb14a
Merge branch 'comfyanonymous:master' into master
2025-02-20 23:06:56 +03:00
comfyanonymous
12da6ef581
Apparently directml supports fp16.
2025-02-20 09:30:24 -05:00
Silver
c5be423d6b
Fix link pointing to non-existing docs ( #6891 )
...
* Fix link pointing to non-existing docs
The current link points to a path that no longer exists.
I changed it to point to the correct path for custom node datatypes.
* Update node_typing.py
2025-02-20 07:07:07 -05:00
patientx
a813e39d54
Merge branch 'comfyanonymous:master' into master
2025-02-19 16:19:26 +03:00
maedtb
5715be2ca9
Fix Hunyuan unet config detection for some models. ( #6877 )
...
The change to support 32 channel hunyuan models is missing the `key_prefix` on the key.
This addresses a complaint in the comments of acc152b674.
2025-02-19 07:14:45 -05:00
patientx
4e6a5fc548
Merge branch 'comfyanonymous:master' into master
2025-02-19 13:52:34 +03:00
bymyself
afc85cdeb6
Add Load Image Output node ( #6790 )
...
* add LoadImageOutput node
* add route for input/output/temp files
* update node_typing.py
* use literal type for image_folder field
* mark node as beta
2025-02-18 17:53:01 -05:00
Jukka Seppänen
acc152b674
Support loading and using SkyReels-V1-Hunyuan-I2V ( #6862 )
...
* Support SkyReels-V1-Hunyuan-I2V
* VAE scaling
* Fix T2V
oops
* Proper latent scaling
2025-02-18 17:06:54 -05:00
patientx
3bde94efbb
Merge branch 'comfyanonymous:master' into master
2025-02-18 16:27:10 +03:00
comfyanonymous
b07258cef2
Fix typo.
...
Let me know if this slows things down on 2000 series and below.
2025-02-18 07:28:33 -05:00
patientx
ef2e97356e
Merge branch 'comfyanonymous:master' into master
2025-02-17 14:04:15 +03:00
comfyanonymous
31e54b7052
Improve AMD arch detection.
2025-02-17 04:53:40 -05:00
comfyanonymous
8c0bae50c3
bf16 manual cast works on old AMD.
2025-02-17 04:42:40 -05:00
patientx
e8bf0ca27f
Merge branch 'comfyanonymous:master' into master
2025-02-17 12:42:02 +03:00
comfyanonymous
530412cb9d
Refactor torch version checks to be more future proof.
2025-02-17 04:36:45 -05:00
patientx
45c55f6cd0
Merge branch 'comfyanonymous:master' into master
2025-02-16 14:37:41 +03:00
comfyanonymous
e2919d38b4
Disable bf16 on AMD GPUs that don't support it.
2025-02-16 05:46:10 -05:00
patientx
6f05ace055
Merge branch 'comfyanonymous:master' into master
2025-02-14 13:50:23 +03:00
comfyanonymous
1cd6cd6080
Disable pytorch attention in VAE for AMD.
2025-02-14 05:42:14 -05:00
patientx
f125a37bdf
Update model_management.py
2025-02-14 12:33:27 +03:00
patientx
99d2824d5a
Update model_management.py
2025-02-14 12:30:19 +03:00
comfyanonymous
d7b4bf21a2
Auto enable mem efficient attention on gfx1100 on pytorch nightly 2.7
...
I'm not sure which arches are supported yet. If you see improvements in
memory usage while using --use-pytorch-cross-attention on your AMD GPU, let
me know and I will add it to the list.
2025-02-14 04:18:14 -05:00
patientx
4d66aa9709
Merge branch 'comfyanonymous:master' into master
2025-02-14 11:00:12 +03:00
comfyanonymous
019c7029ea
Add a way to set a different compute dtype for the model at runtime.
...
Currently only works for diffusion models.
2025-02-13 20:34:03 -05:00
patientx
bce4176d3d
fixes to use pytorch-attention
2025-02-13 19:17:35 +03:00
patientx
f9ee02080f
Merge branch 'comfyanonymous:master' into master
2025-02-13 16:37:24 +03:00
comfyanonymous
8773ccf74d
Better memory estimation for ROCm devices that support mem efficient attention.
...
There is no way to check if the card actually supports it, so it assumes
that it does if you use --use-pytorch-cross-attention.
2025-02-13 08:32:36 -05:00
patientx
e4448cf48e
Merge branch 'comfyanonymous:master' into master
2025-02-12 15:50:23 +03:00
comfyanonymous
1d5d6586f3
Fix ruff.
2025-02-12 06:49:16 -05:00
zhoufan2956
35740259de
mix_ascend_bf16_infer_err ( #6794 )
2025-02-12 06:48:11 -05:00
comfyanonymous
ab888e1e0b
Add add_weight_wrapper function to model patcher.
...
Functions can now easily be added to wrap/modify model weights.
2025-02-12 05:55:35 -05:00
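The weight-wrapper idea is a callable that receives a weight tensor and returns a modified one when the weight is used. A hypothetical sketch; the actual add_weight_wrapper signature on ModelPatcher may differ:

    import torch

    def scale_weight(weight: torch.Tensor) -> torch.Tensor:
        # Example wrapper: return a scaled copy, leaving the original intact.
        return weight * 0.5

    # Hypothetical usage; argument names are assumptions, not the real API:
    # patcher.add_weight_wrapper("diffusion_model.", scale_weight)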
comfyanonymous
d9f0fcdb0c
Cleanup.
2025-02-11 17:17:03 -05:00
HishamC
b124256817
Fix for running via DirectML ( #6542 )
...
* Fix for running via DirectML
Fix DirectML empty image generation issue with Flux1. Add CPU fallback for unsupported path. Verified the model works on AMD GPUs.
* Fix formatting
* Update causal mask calculation
2025-02-11 17:11:32 -05:00
patientx
196fc385e1
Merge branch 'comfyanonymous:master' into master
2025-02-11 16:38:17 +03:00
comfyanonymous
af4b7c91be
Make --force-fp16 actually force the diffusion model to be fp16.
2025-02-11 08:33:09 -05:00
patientx
2a0bc66fed
Merge branch 'comfyanonymous:master' into master
2025-02-10 15:41:15 +03:00
comfyanonymous
4027466c80
Make lumina model work with any latent resolution.
2025-02-10 00:24:20 -05:00
patientx
9561b03cc7
Merge branch 'comfyanonymous:master' into master
2025-02-09 15:33:03 +03:00
comfyanonymous
095d867147
Remove useless function.
2025-02-09 07:02:57 -05:00
Pam
caeb27c3a5
res_multistep: Fix cfgpp and add ancestral samplers ( #6731 )
2025-02-08 19:39:58 -05:00
comfyanonymous
3d06e1c555
Make error more clear to user.
2025-02-08 18:57:24 -05:00
catboxanon
43a74c0de1
Allow FP16 accumulation with --fast ( #6453 )
...
Currently only applies to PyTorch nightly releases (>=20250208).
2025-02-08 17:00:56 -05:00
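The underlying PyTorch switch appears to be a matmul backends flag; a guarded sketch, with the flag name treated as an assumption:

    import torch

    # Assumed flag name on recent PyTorch builds; older versions lack it,
    # hence the hasattr guard.
    if hasattr(torch.backends.cuda.matmul, "allow_fp16_accumulation"):
        torch.backends.cuda.matmul.allow_fp16_accumulation = True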
patientx
9a9da027b2
Merge branch 'comfyanonymous:master' into master
2025-02-07 12:02:36 +03:00
comfyanonymous
079eccc92a
Don't compress http response by default.
...
Remove argument to disable it.
Add new --enable-compress-response-body argument to enable it.
2025-02-07 03:29:21 -05:00
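For context, an opt-in compression middleware in aiohttp (the web framework ComfyUI's server uses) can be as small as the sketch below; this is an illustration, not the project's actual middleware:

    from aiohttp import web

    @web.middleware
    async def compress_body(request, handler):
        response = await handler(request)
        response.enable_compression()  # negotiate gzip/deflate per request
        return response

    app = web.Application(middlewares=[compress_body])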
patientx
4a1e3ee925
Merge branch 'comfyanonymous:master' into master
2025-02-06 14:33:29 +03:00
comfyanonymous
14880e6dba
Remove some useless code.
2025-02-06 05:00:37 -05:00
patientx
93c0fc3446
Update supported_models.py
2025-02-05 23:12:38 +03:00
patientx
f8c2ab631a
Merge branch 'comfyanonymous:master' into master
2025-02-05 23:10:21 +03:00
comfyanonymous
37cd448529
Set the shift for Lumina back to 6.
2025-02-05 14:49:52 -05:00
comfyanonymous
94f21f9301
Upcasting rope to fp32 seems to make no difference in this model.
2025-02-05 04:32:47 -05:00
comfyanonymous
60653004e5
Use regular numbers for rope in lumina model.
2025-02-05 04:17:25 -05:00
patientx
059629a5fb
Merge branch 'comfyanonymous:master' into master
2025-02-05 10:15:14 +03:00
comfyanonymous
a57d635c5f
Fix lumina 2 batches.
2025-02-04 21:48:11 -05:00
patientx
523b5352b8
Merge branch 'comfyanonymous:master' into master
2025-02-04 18:16:21 +03:00
comfyanonymous
8ac2dddeed
Lower the default shift of lumina to reduce artifacts.
2025-02-04 06:50:37 -05:00
patientx
b8ab0f2091
Merge branch 'comfyanonymous:master' into master
2025-02-04 12:32:22 +03:00
comfyanonymous
3e880ac709
Fix on python 3.9
2025-02-04 04:20:56 -05:00
comfyanonymous
e5ea112a90
Support Lumina 2 model.
2025-02-04 04:16:30 -05:00
patientx
bf081a208a
Merge branch 'comfyanonymous:master' into master
2025-02-02 18:27:25 +03:00
comfyanonymous
44e19a28d3
Use maximum negative value instead of -inf for masks in text encoders.
...
This is probably more correct.
2025-02-02 09:46:00 -05:00
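Why this is more correct: with the dtype's most negative finite value, a softmax over a fully-masked row degrades to a uniform distribution instead of the NaNs that a row of -inf produces. A sketch, assuming a hypothetical build_additive_mask helper:

    import torch

    def build_additive_mask(keep, dtype):
        # keep: boolean tensor, True where attention is allowed.
        neg = torch.finfo(dtype).min  # most negative finite value, not -inf
        return torch.where(keep, torch.zeros((), dtype=dtype),
                           torch.full((), neg, dtype=dtype))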
patientx
a5eb46e557
Merge branch 'comfyanonymous:master' into master
2025-02-02 17:30:16 +03:00
Dr.Lt.Data
0a0df5f136
better guide message for sageattention ( #6634 )
2025-02-02 09:26:47 -05:00
KarryCharon
24d6871e47
add disable-compress-response-body cli arg; add compress middleware ( #6672 )
2025-02-02 09:24:55 -05:00
patientx
2ccb7dd301
Merge branch 'comfyanonymous:master' into master
2025-02-01 15:33:58 +03:00
comfyanonymous
9e1d301129
Only use stable cascade lora format with cascade model.
2025-02-01 06:35:22 -05:00
patientx
97f5a7d844
Merge branch 'comfyanonymous:master' into master
2025-01-30 23:03:44 +03:00
comfyanonymous
8d8dc9a262
Allow batch of different sigmas when noise scaling.
2025-01-30 06:49:52 -05:00
patientx
98cf486504
Merge branch 'comfyanonymous:master' into master
2025-01-29 18:53:00 +03:00
filtered
222f48c0f2
Allow changing folder_paths.base_path via command line argument. ( #6600 )
...
* Reimpl. CLI arg directly inside folder_paths.
* Update tests to use CLI arg mocking.
* Revert last-minute refactor.
* Fix test state pollution.
2025-01-29 08:06:28 -05:00
patientx
a5c944f482
Merge branch 'comfyanonymous:master' into master
2025-01-28 19:20:13 +03:00
comfyanonymous
13fd4d6e45
More friendly error messages for corrupted safetensors files.
2025-01-28 09:41:09 -05:00
patientx
f77fea7fc6
Merge branch 'comfyanonymous:master' into master
2025-01-27 23:45:19 +03:00
comfyanonymous
255edf2246
Lower minimum ratio of loaded weights on Nvidia.
2025-01-27 05:26:51 -05:00
patientx
1442e34d9e
Merge branch 'comfyanonymous:master' into master
2025-01-26 15:27:37 +03:00
comfyanonymous
67feb05299
Remove redundant code.
2025-01-25 19:04:53 -05:00
patientx
73433fffa0
Merge branch 'comfyanonymous:master' into master
2025-01-24 15:16:37 +03:00
comfyanonymous
14ca5f5a10
Remove useless code.
2025-01-24 06:15:54 -05:00
patientx
076d037620
Merge branch 'comfyanonymous:master' into master
2025-01-23 13:56:49 +03:00
comfyanonymous
96e2a45193
Remove useless code.
2025-01-23 05:56:23 -05:00
Chenlei Hu
dfa2b6d129
Remove unused function lcm in conds.py ( #6572 )
2025-01-23 05:54:09 -05:00
patientx
d9515843d0
Merge branch 'comfyanonymous:master' into master
2025-01-23 02:28:08 +03:00
comfyanonymous
d6bbe8c40f
Remove support for python 3.8.
2025-01-22 17:04:30 -05:00
patientx
fc09aa398c
Merge branch 'comfyanonymous:master' into master
2025-01-22 15:04:13 +03:00
chaObserv
e857dd48b8
Add gradient estimation sampler ( #6554 )
2025-01-22 05:29:40 -05:00
patientx
3402d28bcd
Merge branch 'comfyanonymous:master' into master
2025-01-21 00:06:28 +03:00
comfyanonymous
fb2ad645a3
Add FluxDisableGuidance node to disable using the guidance embed.
2025-01-20 14:50:24 -05:00
patientx
39e164a441
Merge branch 'comfyanonymous:master' into master
2025-01-20 19:27:59 +03:00
comfyanonymous
d8a7a32779
Cleanup old TODO.
2025-01-20 03:44:13 -05:00
patientx
df418fc3fd
Added ROCm and HIP hiding for certain nodes that were getting errors under ZLUDA.
2025-01-19 17:49:55 +03:00
patientx
88eae2b0d1
Merge branch 'comfyanonymous:master' into master
2025-01-19 15:11:53 +03:00
Sergii Dymchenko
ebf038d4fa
Use torch.special.expm1 ( #6388 )
...
* Use `torch.special.expm1`
This function provides greater precision than `exp(x) - 1` for small values of `x`.
Found with TorchFix https://github.com/pytorch-labs/torchfix/
* Use non-alias
2025-01-19 04:54:32 -05:00
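The precision difference is easy to demonstrate in float32: for x = 1e-8, 1 + x rounds to exactly 1, so exp(x) - 1 cancels to zero, while expm1 keeps the small value:

    import torch

    x = torch.tensor(1e-8, dtype=torch.float32)
    print(torch.exp(x) - 1)        # tensor(0.)
    print(torch.special.expm1(x))  # tensor(1.0000e-08)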
patientx
8ee302c9dd
Merge branch 'comfyanonymous:master' into master
2025-01-19 02:40:53 +03:00
catboxanon
b1a02131c9
Remove comfy.samplers self-import ( #6506 )
2025-01-18 17:49:51 -05:00
patientx
e3207a0560
Merge branch 'comfyanonymous:master' into master
2025-01-18 16:08:27 +03:00
comfyanonymous
507199d9a8
Uni pc sampler now works with audio and video models.
2025-01-18 05:27:58 -05:00
comfyanonymous
2f3ab40b62
Add warning when using old pytorch versions.
2025-01-17 18:47:27 -05:00
patientx
5388be0f56
Merge branch 'comfyanonymous:master' into master
2025-01-17 09:41:17 +03:00
comfyanonymous
0aa2368e46
Fix some cosmos fp8 issues.
2025-01-16 17:45:37 -05:00
comfyanonymous
cca96a85ae
Fix cosmos VAE failing with videos longer than 121 frames.
2025-01-16 16:30:06 -05:00
patientx
4afa79a368
Merge branch 'comfyanonymous:master' into master
2025-01-16 17:23:00 +03:00
comfyanonymous
31831e6ef1
Code refactor.
2025-01-16 07:23:54 -05:00
comfyanonymous
88ceb28e20
Tweak hunyuan memory usage factor.
2025-01-16 06:31:03 -05:00
patientx
ed13b68e4f
Merge branch 'comfyanonymous:master' into master
2025-01-16 13:59:32 +03:00
comfyanonymous
23289a6a5c
Clean up some debug lines.
2025-01-16 04:24:39 -05:00
comfyanonymous
9d8b6c1f46
More accurate memory estimation for cosmos and hunyuan video.
2025-01-16 03:48:40 -05:00
patientx
a779e34c5b
Merge branch 'comfyanonymous:master' into master
2025-01-16 11:29:26 +03:00
comfyanonymous
6320d05696
Slightly lower hunyuan video memory usage.
2025-01-16 00:23:01 -05:00
comfyanonymous
25683b5b02
Lower cosmos diffusion model memory usage.
2025-01-15 23:46:42 -05:00
comfyanonymous
4758fb64b9
Lower cosmos VAE memory usage by a bit.
2025-01-15 22:57:52 -05:00
comfyanonymous
008761166f
Optimize first attention block in cosmos VAE.
2025-01-15 21:48:46 -05:00
patientx
fb0d92b160
Merge branch 'comfyanonymous:master' into master
2025-01-15 23:33:05 +03:00
comfyanonymous
cba58fff0b
Remove unsafe embedding load for very old pytorch.
2025-01-15 04:32:23 -05:00
patientx
3ece3091d4
Merge branch 'comfyanonymous:master' into master
2025-01-15 11:55:19 +03:00
comfyanonymous
2feb8d0b77
Force safe loading of files in torch format on pytorch 2.4+
...
If this breaks something for you make an issue.
2025-01-15 03:50:27 -05:00
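Safe loading of pickle-based .pt/.ckpt files generally means torch.load's weights_only mode, which refuses to unpickle arbitrary Python objects. A minimal sketch; the exact call site in ComfyUI may differ:

    import torch

    # weights_only=True restricts unpickling to tensors and primitive
    # containers, rejecting attacker-controlled objects in the file.
    state_dict = torch.load("model.ckpt", weights_only=True, map_location="cpu")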
patientx
104e6a9685
Merge branch 'comfyanonymous:master' into master
2025-01-15 03:59:33 +03:00
Pam
c78a45685d
Rewrite res_multistep sampler and implement res_multistep_cfg_pp sampler. ( #6462 )
2025-01-14 18:20:06 -05:00
patientx
4f01f72bed
Update zluda.py
2025-01-14 20:03:55 +03:00
patientx
c4861c74d4
Updated ZLUDA patching method.
2025-01-14 19:57:22 +03:00
patientx
c3fc894ce2
Add files via upload
2025-01-14 19:54:44 +03:00
patientx
c7ebd121d6
Merge branch 'comfyanonymous:master' into master
2025-01-14 15:50:05 +03:00
comfyanonymous
3aaabb12d4
Implement Cosmos Image/Video to World (Video) diffusion models.
...
Use CosmosImageToVideoLatent to set the input image/video.
2025-01-14 05:14:10 -05:00
patientx
aa2a83ec35
Merge branch 'comfyanonymous:master' into master
2025-01-13 14:46:19 +03:00
comfyanonymous
1f1c7b7b56
Remove useless code.
2025-01-13 03:52:37 -05:00
patientx
b03621d13b
Merge branch 'comfyanonymous:master' into master
2025-01-12 14:31:19 +03:00
comfyanonymous
90f349f93d
Add res_multistep sampler from the cosmos code.
...
This sampler should work with all models.
2025-01-12 03:10:07 -05:00
patientx
d319705d78
Merge branch 'comfyanonymous:master' into master
2025-01-12 00:39:02 +03:00
Jedrzej Kosinski
6c9bd11fa3
Hooks Part 2 - TransformerOptionsHook and AdditionalModelsHook ( #6377 )
...
* Add 'sigmas' to transformer_options so that downstream code can know about the full scope of current sampling run, fix Hook Keyframes' guarantee_steps=1 inconsistent behavior with sampling split across different Sampling nodes/sampling runs by referencing 'sigmas'
* Cleaned up hooks.py, refactored Hook.should_register and add_hook_patches to use target_dict instead of target so that more information can be provided about the current execution environment if needed
* Refactor WrapperHook into TransformerOptionsHook, as there is no need to separate out Wrappers/Callbacks/Patches into different hook types (all affect transformer_options)
* Refactored HookGroup to also store a dictionary of hooks separated by hook_type, modified necessary code to no longer need to manually separate out hooks by hook_type
* In inner_sample, change "sigmas" to "sampler_sigmas" in transformer_options to not conflict with the "sigmas" that will overwrite "sigmas" in _calc_cond_batch
* Refactored 'registered' to be HookGroup instead of a list of Hooks, made AddModelsHook operational and compliant with should_register result, moved TransformerOptionsHook handling out of ModelPatcher.register_all_hook_patches, support patches in TransformerOptionsHook properly by casting any patches/wrappers/hooks to proper device at sample time
* Made hook clone code sane, made clear ObjectPatchHook and SetInjectionsHook are not yet operational
* Fix performance of hooks when hooks are appended via Cond Pair Set Props nodes by properly caching between positive and negative conds, make hook_patches_backup behave as intended (in the case that something pre-registers WeightHooks on the ModelPatcher instead of registering it at sample time)
* Filter only registered hooks on self.conds in CFGGuider.sample
* Make hook_scope functional for TransformerOptionsHook
* removed 4 whitespace lines to satisfy Ruff,
* Add a get_injections function to ModelPatcher
* Made TransformerOptionsHook contribute to registered hooks properly, added some doc strings and removed a so-far unused variable
* Rename AddModelsHooks to AdditionalModelsHook, rename SetInjectionsHook to InjectionsHook (not yet implemented, but at least getting the naming figured out)
* Clean up a typehint
2025-01-11 12:20:23 -05:00
patientx
8ac79a7563
Merge branch 'comfyanonymous:master' into master
2025-01-11 15:04:10 +03:00
comfyanonymous
ee8a7ab69d
Fast latent preview for Cosmos.
2025-01-11 04:41:24 -05:00
patientx
7d45042e2e
Merge branch 'comfyanonymous:master' into master
2025-01-10 18:31:48 +03:00
comfyanonymous
2ff3104f70
WIP support for Nvidia Cosmos 7B and 14B text to world (video) models.
2025-01-10 09:14:16 -05:00
patientx
00cf1206e8
Merge branch 'comfyanonymous:master' into master
2025-01-10 15:44:49 +03:00
comfyanonymous
129d8908f7
Add argument to skip the output reshaping in the attention functions.
2025-01-10 06:27:37 -05:00
patientx
c37b5ccf29
Merge branch 'comfyanonymous:master' into master
2025-01-09 18:15:43 +03:00
comfyanonymous
ff838657fa
Cleaner handling of attention mask in ltxv model code.
2025-01-09 07:12:03 -05:00
patientx
0b933e1634
Merge branch 'comfyanonymous:master' into master
2025-01-09 12:57:31 +03:00
comfyanonymous
2307ff6746
Improve some logging messages.
2025-01-08 19:05:22 -05:00
patientx
867659b035
Merge branch 'comfyanonymous:master' into master
2025-01-08 01:49:22 +03:00
comfyanonymous
d0f3752e33
Properly calculate inner dim for t5 model.
...
This is required to support some different types of t5 models.
2025-01-07 17:33:03 -05:00
patientx
e1aa83d068
Merge branch 'comfyanonymous:master' into master
2025-01-07 13:57:13 +03:00
comfyanonymous
4209edf48d
Make a few more samplers deterministic.
2025-01-07 02:12:32 -05:00
patientx
58cfa6b7f3
Merge branch 'comfyanonymous:master' into master
2025-01-07 09:25:41 +03:00
Chenlei Hu
d055325783
Document get_attr and get_model_object ( #6357 )
...
* Document get_attr and get_model_object
* Update model_patcher.py
* Update model_patcher.py
* Update model_patcher.py
2025-01-06 20:12:22 -05:00
patientx
6c1e37c091
Merge branch 'comfyanonymous:master' into master
2025-01-06 14:50:48 +03:00
comfyanonymous
916d1e14a9
Make ancestral samplers more deterministic.
2025-01-06 03:04:32 -05:00
Jedrzej Kosinski
c496e53519
In inner_sample, change "sigmas" to "sampler_sigmas" in transformer_options to not conflict with the "sigmas" that will overwrite "sigmas" in _calc_cond_batch ( #6360 )
2025-01-06 01:36:47 -05:00
patientx
a5a09e45dd
Merge branch 'comfyanonymous:master' into master
2025-01-04 15:42:12 +03:00
comfyanonymous
d45ebb63f6
Remove old unused function.
2025-01-04 07:20:54 -05:00
patientx
c523c36aef
Merge branch 'comfyanonymous:master' into master
2025-01-02 19:55:49 +03:00
comfyanonymous
9e9c8a1c64
Clear cache as often on AMD as Nvidia.
...
I think the issue this was working around has been solved.
If you notice that this change slows things down or causes stutters on
your AMD GPU with ROCm on Linux, please report it.
2025-01-02 08:44:16 -05:00
patientx
bfc4fb0efb
Merge branch 'comfyanonymous:master' into master
2025-01-02 00:37:45 +03:00
Andrew Kvochko
0f11d60afb
Fix temporal tiling for decoder, remove redundant tiles. ( #6306 )
...
This commit fixes the temporal tile size calculation, and removes
a redundant tile at the end of the range when its elements are
completely covered by the previous tile.
Co-authored-by: Andrew Kvochko <a.kvochko@lightricks.com>
2025-01-01 16:29:01 -05:00
patientx
0dbf8238af
Merge branch 'comfyanonymous:master' into master
2025-01-01 15:55:00 +03:00
comfyanonymous
79eea51a1d
Fix and enforce all ruff W rules.
2025-01-01 03:08:33 -05:00
patientx
5c8d73f4b4
Merge branch 'comfyanonymous:master' into master
2025-01-01 02:14:54 +03:00
blepping
c0338a46a4
Fix unknown sampler error handling in calculate_sigmas function ( #6280 )
...
Modernize calculate_sigmas function
2024-12-31 17:33:50 -05:00
patientx
5e9aa9bb9e
Merge branch 'comfyanonymous:master' into master
2024-12-31 23:34:57 +03:00
Jedrzej Kosinski
1c99734e5a
Add missing model_options param ( #6296 )
2024-12-31 14:46:55 -05:00
patientx
419df8d958
Merge branch 'comfyanonymous:master' into master
2024-12-31 12:22:04 +03:00
filtered
67758f50f3
Fix custom node type-hinting examples ( #6281 )
...
* Fix import in comfy_types doc / sample
* Clarify docstring
2024-12-31 03:41:09 -05:00
comfyanonymous
b7572b2f87
Fix and enforce no trailing whitespace.
2024-12-31 03:16:37 -05:00
patientx
cbcd4aa616
Merge branch 'comfyanonymous:master' into master
2024-12-30 15:02:02 +03:00
blepping
a90aafafc1
Add kl_optimal scheduler ( #6206 )
...
* Add kl_optimal scheduler
* Rename kl_optimal_schedule to kl_optimal_scheduler to be more consistent
2024-12-30 05:09:38 -05:00
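The kl_optimal schedule comes from the "Align Your Steps" paper: it interpolates linearly in arctan space between sigma_max and sigma_min. A sketch under that assumption, not necessarily the merged implementation:

    import math
    import torch

    def kl_optimal_scheduler(n, sigma_min, sigma_max):
        # sigma(t) = tan((1 - t) * atan(sigma_max) + t * atan(sigma_min)),
        # sampled at n points, with a trailing 0.0 as the final sigma.
        t = torch.linspace(0, 1, n)
        sigmas = torch.tan((1 - t) * math.atan(sigma_max) + t * math.atan(sigma_min))
        return torch.cat([sigmas, sigmas.new_zeros(1)])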
patientx
349ddc0b92
Merge branch 'comfyanonymous:master' into master
2024-12-30 12:21:28 +03:00
comfyanonymous
d9b7cfac7e
Fix and enforce new lines at the end of files.
2024-12-30 04:14:59 -05:00
patientx
cd869622df
Merge branch 'comfyanonymous:master' into master
2024-12-30 11:54:26 +03:00
Jedrzej Kosinski
3507870535
Add 'sigmas' to transformer_options so that downstream code can know about the full scope of current sampling run, fix Hook Keyframes' guarantee_steps=1 inconsistent behavior with sampling split across different Sampling nodes/sampling runs by referencing 'sigmas' ( #6273 )
2024-12-30 03:42:49 -05:00
patientx
cd502aa403
Merge branch 'comfyanonymous:master' into master
2024-12-29 13:09:24 +03:00
comfyanonymous
a618f768e0
Auto reshape 2d to 3d latent for single image generation on video model.
2024-12-29 02:26:49 -05:00
patientx
9f71405928
Merge branch 'comfyanonymous:master' into master
2024-12-28 14:38:54 +03:00
comfyanonymous
b504bd606d
Add ruff rule for empty line with trailing whitespace.
2024-12-28 05:23:08 -05:00
patientx
b7c91dd68c
Merge branch 'comfyanonymous:master' into master
2024-12-28 12:02:44 +03:00
comfyanonymous
d170292594
Remove some trailing white space.
2024-12-27 18:02:30 -05:00
filtered
9cfd185676
Add option to log non-error output to stdout ( #6243 )
...
* nit
* Add option to log non-error output to stdout
- No change to default behaviour
- Adds CLI argument: --log-stdout
- With this arg present, any logging of a level below logging.ERROR will be sent to stdout instead of stderr
2024-12-27 14:40:05 -05:00
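A sketch of the routing the flag describes, splitting one logger across two streams; the handler wiring is illustrative, not ComfyUI's actual setup:

    import logging
    import sys

    class BelowError(logging.Filter):
        def filter(self, record):
            return record.levelno < logging.ERROR

    out = logging.StreamHandler(sys.stdout)   # INFO/WARNING -> stdout
    out.addFilter(BelowError())
    err = logging.StreamHandler(sys.stderr)   # ERROR and above -> stderr
    err.setLevel(logging.ERROR)
    logging.basicConfig(level=logging.INFO, handlers=[out, err])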
patientx
b2a9683d75
Merge branch 'comfyanonymous:master' into master
2024-12-27 15:38:26 +03:00
comfyanonymous
4b5bcd8ac4
Closer memory estimation for hunyuan dit model.
2024-12-27 07:37:00 -05:00
comfyanonymous
ceb50b2cbf
Closer memory estimation for pixart models.
2024-12-27 07:30:09 -05:00
patientx
4590f75633
Merge branch 'comfyanonymous:master' into master
2024-12-27 09:59:51 +03:00
comfyanonymous
160ca08138
Use python 3.9 in launch test instead of 3.8
...
Fix ruff check.
2024-12-26 20:05:54 -05:00
Huazhong Ji
c4bfdba330
Support ascend npu ( #5436 )
...
* support ascend npu
Co-authored-by: YukMingLaw <lymmm2@163.com>
Co-authored-by: starmountain1997 <guozr1997@hotmail.com>
Co-authored-by: Ginray <ginray0215@gmail.com>
2024-12-26 19:36:50 -05:00
patientx
1f9acbbfca
Merge branch 'comfyanonymous:master' into master
2024-12-26 23:22:48 +03:00
comfyanonymous
ee9547ba31
Improve temporal VAE Encode (Tiled) math.
2024-12-26 07:18:49 -05:00
patientx
49fa16cc7a
Merge branch 'comfyanonymous:master' into master
2024-12-25 14:05:18 +03:00
comfyanonymous
19a64d6291
Cleanup some mac related code.
2024-12-25 05:32:51 -05:00
comfyanonymous
b486885e08
Disable bfloat16 on older mac.
2024-12-25 05:18:50 -05:00
comfyanonymous
0229228f3f
Clean up the VAE dtypes code.
2024-12-25 04:50:34 -05:00
comfyanonymous
99a1fb6027
Make fast fp8 take a bit less peak memory.
2024-12-24 18:05:19 -05:00
patientx
077ebf7b17
Merge branch 'comfyanonymous:master' into master
2024-12-24 16:53:56 +03:00
comfyanonymous
73e04987f7
Prevent black images in VAE Decode (Tiled) node.
...
Overlap should be a minimum of 1 with a tiling of 2 for tiled temporal VAE decoding.
2024-12-24 07:36:30 -05:00
comfyanonymous
5388df784a
Add temporal tiling to VAE Encode (Tiled) node.
2024-12-24 07:10:09 -05:00
patientx
0f7b4f063d
Merge branch 'comfyanonymous:master' into master
2024-12-24 15:06:02 +03:00
comfyanonymous
bc6dac4327
Add temporal tiling to VAE Decode (Tiled) node.
...
You can now do tiled VAE decoding on the temporal direction for videos.
2024-12-23 20:03:37 -05:00
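Temporal tiling follows the same recipe as spatial tiling: decode overlapping clips along the time axis and cross-fade the seams. A simplified sketch, assuming a 1:1 temporal scale between latent and pixel frames (the real node also handles spatial tiles and temporal compression):

    import torch

    def decode_temporal_tiles(decode, latent, tile=16, overlap=4):
        # latent: (B, C, T, H, W); decode() returns pixels of equal length.
        T = latent.shape[2]
        out = weight = None
        for t0 in range(0, T, tile - overlap):
            clip = decode(latent[:, :, t0:t0 + tile])
            t1 = t0 + clip.shape[2]
            if out is None:
                out = clip.new_zeros(clip.shape[0], clip.shape[1], T,
                                     clip.shape[3], clip.shape[4])
                weight = clip.new_zeros(1, 1, T, 1, 1)
            ramp = clip.new_ones(clip.shape[2])
            if t0 > 0:  # fade this tile in across the overlap region
                ramp[:overlap] = torch.linspace(0, 1, overlap + 1)[1:]
            out[:, :, t0:t1] += clip * ramp.view(1, 1, -1, 1, 1)
            weight[:, :, t0:t1] += ramp.view(1, 1, -1, 1, 1)
            if t1 >= T:
                break
        return out / weight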
patientx
00afa8b34f
Merge branch 'comfyanonymous:master' into master
2024-12-23 11:36:49 +03:00
comfyanonymous
15564688ed
Add a try/except block so that if the torch version is weird it won't crash.
2024-12-23 03:22:48 -05:00
Simon Lui
c6b9c11ef6
Add oneAPI device selector for xpu and some other changes. ( #6112 )
...
* Add oneAPI device selector and some other minor changes.
* Fix device selector variable name.
* Flip minor version check sign.
* Undo changes to README.md.
2024-12-23 03:18:32 -05:00
patientx
403a081215
Merge branch 'comfyanonymous:master' into master
2024-12-23 10:33:06 +03:00
comfyanonymous
e44d0ac7f7
Make --novram completely offload weights.
...
This flag is mainly used for testing the weight offloading; it shouldn't
actually be used in practice.
Remove useless import.
2024-12-23 01:51:08 -05:00
comfyanonymous
56bc64f351
Comment out some useless code.
2024-12-22 23:51:14 -05:00
zhangp365
f7d83b72e0
fixed a bug in ldm/pixart/blocks.py ( #6158 )
2024-12-22 23:44:20 -05:00
comfyanonymous
80f07952d2
Fix lowvram issue with ltxv vae.
2024-12-22 23:20:17 -05:00
patientx
757335d901
Update supported_models.py
2024-12-23 02:54:49 +03:00
patientx
713eca2176
Update supported_models.py
2024-12-23 02:32:50 +03:00
patientx
e9d8cad2f0
Merge branch 'comfyanonymous:master' into master
2024-12-22 16:56:29 +03:00
comfyanonymous
57f330caf9
Relax minimum ratio of weights loaded in memory on nvidia.
...
This should make it possible to do higher res images/longer videos by
further offloading weights to CPU memory.
Please report an issue if this slows things down on your system.
2024-12-22 03:06:37 -05:00
patientx
88ae56dcf9
Merge branch 'comfyanonymous:master' into master
2024-12-21 15:52:28 +03:00
comfyanonymous
da13b6b827
Get rid of meshgrid warning.
2024-12-20 18:02:12 -05:00
comfyanonymous
c86cd58573
Remove useless code.
2024-12-20 17:50:03 -05:00
comfyanonymous
b5fe39211a
Remove some useless code.
2024-12-20 17:43:50 -05:00
patientx
4d64ade41f
Merge branch 'comfyanonymous:master' into master
2024-12-21 01:30:32 +03:00
comfyanonymous
e946667216
Some fixes/cleanups to pixart code.
...
Commented out the masking related code because it is never used in this
implementation.
2024-12-20 17:10:52 -05:00
patientx
37fc9a3ff2
Merge branch 'comfyanonymous:master' into master
2024-12-21 00:37:40 +03:00
Chenlei Hu
d7969cb070
Replace print with logging ( #6138 )
...
* Replace print with logging
* nit
* nit
* nit
* nit
* nit
* nit
2024-12-20 16:24:55 -05:00
City
bddb02660c
Add PixArt model support ( #6055 )
...
* PixArt initial version
* PixArt Diffusers convert logic
* pos_emb and interpolation logic
* Reduce duplicate code
* Formatting
* Use optimized attention
* Edit empty token logic
* Basic PixArt LoRA support
* Fix aspect ratio logic
* PixArtAlpha text encode with conds
* Use same detection key logic for PixArt diffusers
2024-12-20 15:25:00 -05:00
patientx
07ea41ecc1
Merge branch 'comfyanonymous:master' into master
2024-12-20 13:06:18 +03:00
comfyanonymous
418eb7062d
Support new LTXV VAE.
2024-12-20 04:38:29 -05:00
patientx
ebf13dfe56
Merge branch 'comfyanonymous:master' into master
2024-12-20 10:05:56 +03:00
comfyanonymous
cac68ca813
Fix some more video tiled encode issues.
...
The downscale_ratio formula for the temporal dimension had issues with some
frame numbers.
2024-12-19 23:14:03 -05:00
comfyanonymous
52c1d933b2
Fix tiled hunyuan video VAE encode issue.
...
Some shapes like 1024x1024 with tile_size 256 and overlap 64 had issues.
2024-12-19 22:55:15 -05:00
comfyanonymous
2dda7c11a3
More proper fix for the memory issue.
2024-12-19 16:21:56 -05:00
comfyanonymous
3ad3248ad7
Fix lowvram bug when using a model multiple times in a row.
...
The memory system would load an extra 64MB each time until either the
model was completely in memory or OOM.
2024-12-19 16:04:56 -05:00
patientx
778005af3d
Merge branch 'comfyanonymous:master' into master
2024-12-19 14:51:33 +03:00
comfyanonymous
c441048a4f
Make VAE Encode tiled node work with video VAE.
2024-12-19 05:31:39 -05:00
comfyanonymous
9f4b181ab3
Add fast previews for hunyuan video.
2024-12-18 18:24:23 -05:00
comfyanonymous
cbbf077593
Small optimizations.
2024-12-18 18:23:28 -05:00
patientx
43a0204b07
Merge branch 'comfyanonymous:master' into master
2024-12-18 15:17:15 +03:00
comfyanonymous
ff2ff02168
Support old diffusion-pipe hunyuan video loras.
2024-12-18 06:23:54 -05:00
patientx
947aba46c3
Merge branch 'comfyanonymous:master' into master
2024-12-18 12:33:32 +03:00
comfyanonymous
4c5c4ddeda
Fix regression in VAE code on old pytorch versions.
2024-12-18 03:08:28 -05:00
patientx
c062723ca5
Merge branch 'comfyanonymous:master' into master
2024-12-18 10:16:43 +03:00
comfyanonymous
37e5390f5f
Add: --use-sage-attention to enable SageAttention.
...
You need to have the library installed first.
2024-12-18 01:56:10 -05:00
comfyanonymous
a4f59bc65e
Pick attention implementation based on device in llama code.
2024-12-18 01:30:20 -05:00
patientx
0e5fa013b2
Merge branch 'comfyanonymous:master' into master
2024-12-18 00:43:24 +03:00
comfyanonymous
ca457f7ba1
Properly tokenize the template for hunyuan video.
2024-12-17 16:22:02 -05:00
comfyanonymous
cd6f615038
Fix tiled vae not working with some shapes.
2024-12-17 16:22:02 -05:00
patientx
4ace4e9ecb
Merge branch 'comfyanonymous:master' into master
2024-12-17 19:55:56 +03:00
comfyanonymous
e4e1bff605
Support diffusion-pipe hunyuan video lora format.
2024-12-17 07:14:21 -05:00
patientx
dc574cdc47
Merge branch 'comfyanonymous:master' into master
2024-12-17 13:57:30 +03:00
comfyanonymous
d6656b0c0c
Support llama hunyuan video text encoder in scaled fp8 format.
2024-12-17 04:19:22 -05:00
comfyanonymous
f4cdedea62
Fix regression with ltxv VAE.
2024-12-17 02:17:31 -05:00
comfyanonymous
39b1fc4ccc
Adjust used dtypes for hunyuan video VAE and diffusion model.
2024-12-16 23:31:10 -05:00
comfyanonymous
bda1482a27
Basic Hunyuan Video model support.
2024-12-16 19:35:40 -05:00
comfyanonymous
19ee5d9d8b
Don't expand mask when not necessary.
...
Expanding seems to slow down inference.
2024-12-16 18:22:50 -05:00
Raphael Walker
61b50720d0
Add support for attention masking in Flux ( #5942 )
...
* fix attention OOM in xformers
* allow passing attention mask in flux attention
* allow an attn_mask in flux
* attn masks can be done using replace patches instead of a separate dict
* fix return types
* fix return order
* enumerate
* patch the right keys
* arg names
* fix a silly bug
* fix xformers masks
* replace match with if, elif, else
* mask with image_ref_size
* remove unused import
* remove unused import 2
* fix pytorch/xformers attention
This corrects a weird inconsistency with skip_reshape.
It also allows masks of various shapes to be passed, which will be
automatically expanded (in a memory-efficient way) to a size that is
compatible with xformers or pytorch sdpa respectively.
* fix mask shapes
2024-12-16 18:21:17 -05:00
patientx
9704e3e617
Merge branch 'comfyanonymous:master' into master
2024-12-14 14:24:39 +03:00
comfyanonymous
e83063bf24
Support conv3d in PatchEmbed.
2024-12-14 05:46:04 -05:00
patientx
fd2eeb5e30
Merge branch 'comfyanonymous:master' into master
2024-12-13 16:09:33 +03:00
comfyanonymous
4e14032c02
Make pad_to_patch_size function work on multi dim.
2024-12-13 07:22:05 -05:00
patientx
3218ed8559
Merge branch 'comfyanonymous:master' into master
2024-12-13 11:21:47 +03:00
Chenlei Hu
563291ee51
Enforce all pyflake lint rules ( #6033 )
...
* Enforce F821 undefined-name
* Enforce all pyflake lint rules
2024-12-12 19:29:37 -05:00
Chenlei Hu
2cddbf0821
Lint and fix undefined names (1/N) ( #6028 )
2024-12-12 18:55:26 -05:00
Chenlei Hu
60749f345d
Lint and fix undefined names (3/N) ( #6030 )
2024-12-12 18:49:40 -05:00
Chenlei Hu
d9d7f3c619
Lint all unused variables ( #5989 )
...
* Enable F841
* Autofix
* Remove all unused variable assignment
2024-12-12 17:59:16 -05:00
patientx
5d059779d3
Merge branch 'comfyanonymous:master' into master
2024-12-12 15:26:42 +03:00
comfyanonymous
fd5dfb812c
Set initial load devices for te and model to mps device on mac.
2024-12-12 06:00:31 -05:00
patientx
0759f96414
Merge branch 'comfyanonymous:master' into master
2024-12-11 23:05:47 +03:00
comfyanonymous
7a7efe8424
Support loading some checkpoint files with nested dicts.
2024-12-11 08:04:54 -05:00
patientx
3b1247c8af
Merge branch 'comfyanonymous:master' into master
2024-12-11 12:14:46 +03:00
comfyanonymous
44db978531
Fix a few things in text enc code for models with no eos token.
2024-12-10 23:07:26 -05:00
patientx
3254013ec2
Merge branch 'comfyanonymous:master' into master
2024-12-11 00:42:55 +03:00
comfyanonymous
1c8d11e48a
Support different types of tokenizers.
...
Support tokenizers without an eos token.
Pass full sentences to tokenizer for more efficient tokenizing.
2024-12-10 15:03:39 -05:00
patientx
be4b0b5515
Merge branch 'comfyanonymous:master' into master
2024-12-10 15:06:50 +03:00
catboxanon
23827ca312
Add cond_scale to sampler_post_cfg_function ( #5985 )
2024-12-09 20:13:18 -05:00
patientx
7788049e2e
Merge branch 'comfyanonymous:master' into master
2024-12-10 01:10:54 +03:00
Chenlei Hu
0fd4e6c778
Lint unused import ( #5973 )
...
* Lint unused import
* nit
* Remove unused imports
* revert fix_torch import
* nit
2024-12-09 15:24:39 -05:00
comfyanonymous
e2fafe0686
Make CLIP set last layer node work with t5 models.
2024-12-09 03:57:14 -05:00
Haoming
fbf68c4e52
clamp input ( #5928 )
2024-12-07 14:00:31 -05:00
patientx
025d9ed896
Merge branch 'comfyanonymous:master' into master
2024-12-07 00:34:25 +03:00
comfyanonymous
8af9a91e0c
A few improvements to #5937.
2024-12-06 05:49:15 -05:00
Michael Kupchick
005d2d3a13
ltxv: add noise to guidance image to ensure generated motion. ( #5937 )
2024-12-06 05:46:08 -05:00
comfyanonymous
1e21f4c14e
Make timestep ranges more usable on rectified flow models.
...
This breaks some old workflows but should make the nodes actually useful.
2024-12-05 16:40:58 -05:00
patientx
d35238f60a
Merge branch 'comfyanonymous:master' into master
2024-12-04 23:52:16 +03:00
Chenlei Hu
48272448ad
[Developer Experience] Add node typing ( #5676 )
...
* [Developer Experience] Add node typing
* Shim StrEnum
* nit
* nit
* nit
2024-12-04 15:01:00 -05:00
patientx
743d281f78
Merge branch 'comfyanonymous:master' into master
2024-12-03 23:53:57 +03:00
comfyanonymous
452179fe4f
Make ModelPatcher class clone function work with inheritance.
2024-12-03 13:57:57 -05:00
patientx
b826d3e8c2
Merge branch 'comfyanonymous:master' into master
2024-12-03 14:51:59 +03:00
comfyanonymous
c1b92b719d
Some optimizations to euler a.
2024-12-03 06:11:52 -05:00
comfyanonymous
57e8bf6a9f
Fix case where a memory leak could cause crash.
...
Now the only symptom of code messing up and keeping references to a model
object when it should not will be endless prints in the log instead of the
next workflow crashing ComfyUI.
2024-12-02 19:49:49 -05:00