Commit Graph

1753 Commits

Author SHA1 Message Date
patientx
8115bdf68a Merge branch 'comfyanonymous:master' into master 2025-03-25 22:35:14 +03:00
comfyanonymous
8edc1f44c1 Support more float8 types. 2025-03-25 05:23:49 -04:00
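The float8 change above is essentially a wider dtype allow-list. Below is a minimal sketch of building such a list defensively; the FLOAT8_TYPES name and is_float8 helper are illustrative, not ComfyUI's actual code.

```python
import torch

# Probe for float8 dtypes; older PyTorch builds are missing some or all of them,
# and the *fnuz variants only exist on fairly recent releases.
FLOAT8_TYPES = tuple(
    dt
    for name in ("float8_e4m3fn", "float8_e4m3fnuz", "float8_e5m2", "float8_e5m2fnuz")
    if (dt := getattr(torch, name, None)) is not None
)

def is_float8(tensor: torch.Tensor) -> bool:
    """Return True if the tensor uses one of the float8 dtypes available in this build."""
    return tensor.dtype in FLOAT8_TYPES
```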
patientx
87e937ecd6 Merge branch 'comfyanonymous:master' into master 2025-03-23 18:32:46 +03:00
comfyanonymous
e471c726e5 Fallback to pytorch attention if sage attention fails. 2025-03-22 15:45:56 -04:00
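A minimal sketch of the fallback pattern this commit describes, assuming the optional sageattention package exposes sageattn(q, k, v); the real dispatch in comfy/ldm/modules/attention.py handles layouts, masks, and more.

```python
import torch
import torch.nn.functional as F

def attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Prefer SageAttention, fall back to PyTorch scaled dot-product attention on any failure."""
    try:
        from sageattention import sageattn  # optional dependency; may be absent or unsupported
        return sageattn(q, k, v)
    except Exception:
        # Missing package, unsupported head size, or a kernel error all land here.
        return F.scaled_dot_product_attention(q, k, v)
```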
patientx
64008960e9 Update zluda.py 2025-03-22 13:53:47 +03:00
patientx
a6db9cc07a Merge branch 'comfyanonymous:master' into master 2025-03-22 13:52:43 +03:00
comfyanonymous
d9fa9d307f Automatically set the right sampling type for lotus. 2025-03-21 14:19:37 -04:00
thot experiment
83e839a89b Native LotusD Implementation (#7125)
* draft pass at a native comfy implementation of Lotus-D depth and normal est

* fix model_sampling kludges

* fix ruff

---------

Co-authored-by: comfyanonymous <121283862+comfyanonymous@users.noreply.github.com>
2025-03-21 14:04:15 -04:00
patientx
cf9ad7aae3 Update zluda.py 2025-03-21 12:21:23 +03:00
patientx
28a4de830c Merge branch 'comfyanonymous:master' into master 2025-03-20 14:23:30 +03:00
comfyanonymous
3872b43d4b A few fixes for the hunyuan3d models. 2025-03-20 04:52:31 -04:00
comfyanonymous
32ca0805b7 Fix orientation of hunyuan 3d model. 2025-03-19 19:55:24 -04:00
comfyanonymous
11f1b41bab Initial Hunyuan3Dv2 implementation.
Supports the multiview, mini, turbo models and VAEs.
2025-03-19 16:52:58 -04:00
patientx
4115126a82 Update zluda.py 2025-03-19 12:15:38 +03:00
patientx
34b87d4542 Merge branch 'comfyanonymous:master' into master 2025-03-18 17:14:19 +03:00
comfyanonymous
3b19fc76e3 Allow disabling pe in flux code for some other models. 2025-03-18 05:09:25 -04:00
patientx
c853489349 Merge branch 'comfyanonymous:master' into master 2025-03-18 01:12:05 +03:00
comfyanonymous
50614f1b79 Fix regression with clip vision. 2025-03-17 13:56:11 -04:00
patientx
97dd7b8b52 Merge branch 'comfyanonymous:master' into master 2025-03-17 13:10:34 +03:00
comfyanonymous
6dc7b0bfe3 Add support for giant dinov2 image encoder. 2025-03-17 05:53:54 -04:00
patientx
30e3177e00 Merge branch 'comfyanonymous:master' into master 2025-03-16 21:26:31 +03:00
comfyanonymous
e8e990d6b8 Cleanup code. 2025-03-16 06:29:12 -04:00
patientx
dcc409faa4 Merge branch 'comfyanonymous:master' into master 2025-03-16 13:05:56 +03:00
Jedrzej Kosinski
2e24a15905 Call unpatch_hooks at the start of ModelPatcher.partially_unload (#7253)
* Call unpatch_hooks at the start of ModelPatcher.partially_unload

* Only call unpatch_hooks in partially_unload if lowvram is possible
2025-03-16 06:02:45 -04:00
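A toy sketch of the ordering #7253 describes: unpatch hooks before a partial unload, and only when lowvram offloading is actually possible. Everything below is a simplified stand-in, not the real ModelPatcher.

```python
class ModelPatcherSketch:
    """Stand-in for ComfyUI's ModelPatcher, reduced to the call order from #7253."""

    def __init__(self, lowvram_possible: bool):
        self.lowvram_possible = lowvram_possible
        self.hooks_active = True

    def unpatch_hooks(self) -> None:
        # The real method restores hook-patched weights from their backups.
        self.hooks_active = False

    def partially_unload(self, memory_to_free: int = 0) -> int:
        # Remove hook patches first, but only if lowvram offloading can happen,
        # so no hook-modified weight is offloaded in its patched state.
        if self.lowvram_possible:
            self.unpatch_hooks()
        # ...the usual partial offload of module weights would follow here...
        return 0  # bytes freed (stubbed)
```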
chaObserv
fd5297131f Guard the edge cases of noise term in er_sde (#7265) 2025-03-16 06:02:25 -04:00
patientx
bc664959a4 Merge branch 'comfyanonymous:master' into master 2025-03-15 17:11:51 +03:00
comfyanonymous
55a1b09ddc Allow loading diffusion model files with the "Load Checkpoint" node. 2025-03-15 08:27:49 -04:00
comfyanonymous
3c3988df45 Show a better error message if the VAE is invalid. 2025-03-15 08:26:36 -04:00
comfyanonymous
a2448fc527 Remove useless code. 2025-03-14 18:10:37 -04:00
patientx
977dbafcf6 update the comfyui frontend package version 2025-03-15 00:07:04 +03:00
patientx
ccd9fe7f3c Merge branch 'comfyanonymous:master' into master 2025-03-14 23:47:51 +03:00
comfyanonymous
6a0daa79b6 Make the SkipLayerGuidanceDIT node work on WAN. 2025-03-14 10:55:19 -04:00
FeepingCreature
9c98c6358b Tolerate missing @torch.library.custom_op (#7234)
This can happen on Pytorch versions older than 2.4.
2025-03-14 09:51:26 -04:00
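The guard above amounts to treating torch.library.custom_op as optional. A rough sketch of that kind of shim, assuming a no-op decorator is an acceptable substitute on PyTorch older than 2.4 (this is not the exact upstream fix):

```python
import torch

try:
    # Available on PyTorch 2.4 and newer.
    custom_op = torch.library.custom_op
except AttributeError:
    # Older PyTorch: fall back to a no-op decorator so code that uses
    # @custom_op(...) still imports; the ops just stay plain Python functions.
    def custom_op(name, fn=None, /, **kwargs):
        def decorator(func):
            return func
        return decorator if fn is None else decorator(fn)
```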
patientx
bcea9b9a0c Update attention.py to keep older torch versions running 2025-03-14 16:45:32 +03:00
patientx
eaf40b802d Merge branch 'comfyanonymous:master' into master 2025-03-14 12:00:25 +03:00
FeepingCreature
7aceb9f91c Add --use-flash-attention flag. (#7223)
* Add --use-flash-attention flag.
This is useful on AMD systems, as FA builds are still 10% faster than Pytorch cross-attention.
2025-03-14 03:22:41 -04:00
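A minimal sketch of wiring a switch like the one added in #7223. ComfyUI's real parser lives in comfy/cli_args.py and groups the attention options as mutually exclusive; the snippet below only mimics that shape with illustrative flag handling.

```python
import argparse

parser = argparse.ArgumentParser()
attn = parser.add_mutually_exclusive_group()
attn.add_argument("--use-flash-attention", action="store_true",
                  help="Use Flash Attention kernels for cross-attention.")
attn.add_argument("--use-pytorch-cross-attention", action="store_true",
                  help="Use PyTorch's built-in scaled dot-product attention.")

args = parser.parse_args(["--use-flash-attention"])  # example invocation
backend = "flash-attention" if args.use_flash_attention else "pytorch"
print(f"attention backend: {backend}")
```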
comfyanonymous
35504e2f93 Fix. 2025-03-13 15:03:18 -04:00
comfyanonymous
299436cfed Print mac version. 2025-03-13 10:05:40 -04:00
patientx
b57a624e7c Merge branch 'comfyanonymous:master' into master 2025-03-13 03:13:15 +03:00
Chenlei Hu
9b6cd9b874 [NodeDef] Add documentation on multi_select input option (#7212) 2025-03-12 17:29:39 -04:00
chaObserv
3fc688aebd Ensure the extra_args in dpmpp sde series (#7204) 2025-03-12 17:28:59 -04:00
patientx
4a632a54a4 Merge branch 'comfyanonymous:master' into master 2025-03-12 11:28:56 +03:00
chaObserv
01015bff16 Add er_sde sampler (#7187) 2025-03-12 02:42:37 -04:00
patientx
cf490d92b3 Merge branch 'comfyanonymous:master' into master 2025-03-11 01:28:12 +03:00
comfyanonymous
ca8efab79f Support control loras on Wan. 2025-03-10 17:23:13 -04:00
patientx
c469113159 Merge branch 'comfyanonymous:master' into master 2025-03-09 14:09:50 +03:00
comfyanonymous
9aac21f894 Fix issues with new hunyuan img2vid model and bump version to v0.3.26 2025-03-09 05:07:22 -04:00
Jedrzej Kosinski
528d1b3563 When cached_hook_patches contain weights for hooks, only use hook_backup for unused keys (#7067) 2025-03-09 04:26:31 -04:00
comfyanonymous
7395b0c0d1 Support new hunyuan video i2v model.
Use the new "v2 (replace)" guidance type in HunyuanImageToVideo and set
image_interleave to 4 on the "Text Encode Hunyuan Video" node.
2025-03-08 20:34:47 -05:00
comfyanonymous
0952569493 Fix stable cascade VAE on some lowvram machines. 2025-03-08 20:24:04 -05:00