Commit Graph

3583 Commits

Author SHA1 Message Date
patientx
6f05ace055
Merge branch 'comfyanonymous:master' into master 2025-02-14 13:50:23 +03:00
comfyanonymous
1cd6cd6080 Disable pytorch attention in VAE for AMD. 2025-02-14 05:42:14 -05:00
patientx
f125a37bdf
Update model_management.py 2025-02-14 12:33:27 +03:00
patientx
97146c99c6
Merge branch 'comfyanonymous:master' into master 2025-02-14 12:30:29 +03:00
patientx
99d2824d5a
Update model_management.py 2025-02-14 12:30:19 +03:00
comfyanonymous
d7b4bf21a2 Auto enable mem efficient attention on gfx1100 on pytorch nightly 2.7
I'm not sure which arches are supported yet. If you see improvements in
memory usage while using --use-pytorch-cross-attention on your AMD GPU,
let me know and I will add it to the list.
2025-02-14 04:18:14 -05:00
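The auto-enable logic in the commit above can be sketched roughly as follows. This is a minimal sketch, not the repository's model_management.py code: the gcnArchName attribute is only exposed by ROCm builds of PyTorch, and the architecture list is an assumption the commit itself says is still incomplete.

```python
import torch

# Assumed architecture list; the commit notes the full set is not known yet.
SUPPORTED_ARCHES = ["gfx1100"]

def should_enable_pytorch_attention(device_index: int = 0) -> bool:
    """Rough check: ROCm device with a known arch on PyTorch >= 2.7."""
    if not torch.cuda.is_available():
        return False
    props = torch.cuda.get_device_properties(device_index)
    arch = getattr(props, "gcnArchName", "")  # present only on ROCm builds
    major, minor = (int(x) for x in torch.__version__.split(".")[:2])
    return (major, minor) >= (2, 7) and any(a in arch for a in SUPPORTED_ARCHES)
```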
patientx
4d66aa9709
Merge branch 'comfyanonymous:master' into master 2025-02-14 11:00:12 +03:00
Robin Huang
042a905c37
Open yaml files with utf-8 encoding for extra_model_paths.yaml (#6807)
* Using utf-8 encoding for yaml files.

* Fix test assertion.
2025-02-13 20:39:04 -05:00
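The fix above boils down to passing an explicit encoding when reading the YAML config. A minimal sketch (the function name is illustrative, not the repository's):

```python
import yaml

def load_extra_model_paths(path: str) -> dict:
    # Explicit utf-8 avoids Windows locale defaults (e.g. cp1252) breaking
    # configs that contain non-ASCII paths.
    with open(path, "r", encoding="utf-8") as f:
        return yaml.safe_load(f) or {}
```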
comfyanonymous
019c7029ea Add a way to set a different compute dtype for the model at runtime.
Currently only works for diffusion models.
2025-02-13 20:34:03 -05:00
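As an illustration of the idea only, a runtime-settable compute dtype can be modeled like this; the class and method names are hypothetical and do not reflect ComfyUI's ModelPatcher API:

```python
import torch

class DiffusionWrapper(torch.nn.Module):
    """Hypothetical wrapper: weights keep their storage dtype, math runs in compute_dtype."""

    def __init__(self, model: torch.nn.Module, compute_dtype: torch.dtype = torch.float16):
        super().__init__()
        self.model = model
        self.compute_dtype = compute_dtype

    def set_compute_dtype(self, dtype: torch.dtype) -> None:
        # Switch the math precision at runtime without recasting the weights.
        self.compute_dtype = dtype

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.autocast(device_type=x.device.type, dtype=self.compute_dtype):
            return self.model(x)
```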
patientx
bce4176d3d
Fixes to use pytorch-attention. 2025-02-13 19:17:35 +03:00
patientx
f9ee02080f
Merge branch 'comfyanonymous:master' into master 2025-02-13 16:37:24 +03:00
comfyanonymous
8773ccf74d Better memory estimation for ROCm cards that support mem efficient attention.
There is no way to check whether the card actually supports it, so it is assumed
that it does if you use --use-pytorch-cross-attention on your card.
2025-02-13 08:32:36 -05:00
patientx
e4448cf48e
Merge branch 'comfyanonymous:master' into master 2025-02-12 15:50:23 +03:00
comfyanonymous
1d5d6586f3 Fix ruff. 2025-02-12 06:49:16 -05:00
zhoufan2956
35740259de
Fix Ascend bf16 inference error (#6794) 2025-02-12 06:48:11 -05:00
comfyanonymous
ab888e1e0b Add add_weight_wrapper function to model patcher.
Functions can now easily be added to wrap/modify model weights.
2025-02-12 05:55:35 -05:00
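A sketch of the weight-wrapper pattern the commit describes. The registration interface below is hypothetical and only meant to show the idea of callables that transform a weight before it is used:

```python
import torch
from typing import Callable, Dict, List

WeightWrapper = Callable[[torch.Tensor], torch.Tensor]

class TinyPatcher:
    def __init__(self, model: torch.nn.Module):
        self.model = model
        self.weight_wrappers: Dict[str, List[WeightWrapper]] = {}

    def add_weight_wrapper(self, name: str, fn: WeightWrapper) -> None:
        # Register a callable that modifies the named weight on access.
        self.weight_wrappers.setdefault(name, []).append(fn)

    def patched_weight(self, name: str) -> torch.Tensor:
        weight = dict(self.model.named_parameters())[name].data
        for fn in self.weight_wrappers.get(name, []):
            weight = fn(weight)
        return weight

# Usage: scale one layer's weight by 0.5 without touching the stored tensor.
# patcher = TinyPatcher(model)
# patcher.add_weight_wrapper("linear.weight", lambda w: w * 0.5)
```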
comfyanonymous
d9f0fcdb0c Cleanup. 2025-02-11 17:17:03 -05:00
HishamC
b124256817
Fix for running via DirectML (#6542)
* Fix for running via DirectML

Fix DirectML empty image generation issue with Flux1. Add a CPU fallback for unsupported paths. Verified the model works on AMD GPUs.

* fix formatting

* update causal mask calculation
2025-02-11 17:11:32 -05:00
patientx
196fc385e1
Merge branch 'comfyanonymous:master' into master 2025-02-11 16:38:17 +03:00
comfyanonymous
af4b7c91be Make --force-fp16 actually force the diffusion model to be fp16. 2025-02-11 08:33:09 -05:00
bananasss00
e57d2282d1
Fix incorrect Content-Type for WebP images (#6752) 2025-02-11 04:48:35 -05:00
patientx
2a0bc66fed
Merge branch 'comfyanonymous:master' into master 2025-02-10 15:41:15 +03:00
comfyanonymous
4027466c80 Make lumina model work with any latent resolution. 2025-02-10 00:24:20 -05:00
patientx
9561b03cc7
Merge branch 'comfyanonymous:master' into master 2025-02-09 15:33:03 +03:00
comfyanonymous
095d867147 Remove useless function. 2025-02-09 07:02:57 -05:00
Pam
caeb27c3a5
res_multistep: Fix cfgpp and add ancestral samplers (#6731) 2025-02-08 19:39:58 -05:00
comfyanonymous
3d06e1c555 Make the error clearer to the user. 2025-02-08 18:57:24 -05:00
catboxanon
43a74c0de1
Allow FP16 accumulation with --fast (#6453)
Currently only applies to PyTorch nightly releases (>= 20250208).
2025-02-08 17:00:56 -05:00
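Enabling FP16 accumulation in a nightly PyTorch build can be sketched as below; the attribute only exists in builds that ship the feature, hence the guard. Treat this as an assumption about the mechanism, not the repository's exact --fast code path.

```python
import torch

def try_enable_fp16_accumulation() -> bool:
    matmul = torch.backends.cuda.matmul
    if hasattr(matmul, "allow_fp16_accumulation"):  # nightly-only attribute
        matmul.allow_fp16_accumulation = True
        return True
    return False
```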
patientx
1d39eb81c8
Merge branch 'comfyanonymous:master' into master 2025-02-09 00:35:36 +03:00
comfyanonymous
af93c8d1ee Document which text encoder to use for lumina 2. 2025-02-08 06:57:25 -05:00
patientx
d61d06201a
Merge branch 'comfyanonymous:master' into master 2025-02-08 02:17:20 +03:00
Raphael Walker
832e3f5ca3
Fix another small bug in attention_bias redux (#6737)
* fix a bug in the attn_masked redux code when using weight=1.0

* fix a second bug found in the same code path
2025-02-07 14:44:43 -05:00
patientx
d0118796c9
Update install.bat 2025-02-07 12:49:41 +03:00
patientx
dc9f58ca65
Update patchzluda2.bat 2025-02-07 12:49:11 +03:00
patientx
1f5708663a
Update patchzluda.bat 2025-02-07 12:48:02 +03:00
patientx
9a9da027b2
Merge branch 'comfyanonymous:master' into master 2025-02-07 12:02:36 +03:00
comfyanonymous
079eccc92a Don't compress HTTP responses by default.
Remove the argument that disabled it.

Add a new --enable-compress-response-body argument to enable it.
2025-02-07 03:29:21 -05:00
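The behavior described above (compression off unless explicitly enabled) could look like this in an aiohttp server; the middleware is illustrative, only the flag name comes from the commit:

```python
from aiohttp import web

def make_compression_middleware(enabled: bool):
    @web.middleware
    async def compress_middleware(request: web.Request, handler):
        response = await handler(request)
        if enabled:  # opt-in via --enable-compress-response-body
            response.enable_compression()  # let aiohttp negotiate gzip/deflate
        return response
    return compress_middleware

# app = web.Application(middlewares=[make_compression_middleware(args.enable_compress_response_body)])
```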
patientx
1e9f6dfa8f
Merge branch 'comfyanonymous:master' into master 2025-02-07 01:22:02 +03:00
Raphael Walker
b6951768c4
fix a bug in the attn_masked redux code when using weight=1.0 (#6721) 2025-02-06 16:51:16 -05:00
patientx
1ef4776a96
Merge branch 'comfyanonymous:master' into master 2025-02-06 21:04:11 +03:00
Comfy Org PR Bot
fca304debf
Update frontend to v1.8.14 (#6724)
Co-authored-by: huchenlei <20929282+huchenlei@users.noreply.github.com>
2025-02-06 10:43:10 -05:00
patientx
4a1e3ee925
Merge branch 'comfyanonymous:master' into master 2025-02-06 14:33:29 +03:00
comfyanonymous
14880e6dba Remove some useless code. 2025-02-06 05:00:37 -05:00
Chenlei Hu
f1059b0b82
Remove unused GET /files API endpoint (#6714) 2025-02-05 18:48:36 -05:00
comfyanonymous
debabccb84 Bump ComfyUI version to v0.3.14 2025-02-05 15:48:13 -05:00
patientx
93c0fc3446
Update supported_models.py 2025-02-05 23:12:38 +03:00
patientx
f8c2ab631a
Merge branch 'comfyanonymous:master' into master 2025-02-05 23:10:21 +03:00
comfyanonymous
37cd448529 Set the shift for Lumina back to 6. 2025-02-05 14:49:52 -05:00
comfyanonymous
94f21f9301 Upcasting rope to fp32 seems to make no difference in this model. 2025-02-05 04:32:47 -05:00
comfyanonymous
60653004e5 Use regular numbers for rope in lumina model. 2025-02-05 04:17:25 -05:00