comfyanonymous
1cd6cd6080
Disable pytorch attention in VAE for AMD.
2025-02-14 05:42:14 -05:00
comfyanonymous
d7b4bf21a2
Auto enable mem efficient attention on gfx1100 on pytorch nightly 2.7
...
I'm not sure which arches are supported yet. If you see improvements in
memory usage while using --use-pytorch-cross-attention on your AMD GPU,
let me know and I will add it to the list.
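A rough sketch of what such a gate could look like; the arch check via `gcnArchName` and the exact switch flipped are assumptions, not the commit's code:

```python
import torch

def maybe_enable_mem_efficient_attention(device=0):
    # Only relevant on ROCm builds; torch.version.hip is None elsewhere.
    if torch.version.hip is None:
        return
    # gcnArchName (e.g. "gfx1100") is exposed by ROCm builds of PyTorch.
    arch = torch.cuda.get_device_properties(device).gcnArchName
    # The arch list is an assumption; per the commit message it is still
    # being collected from user reports.
    if any(arch.startswith(a) for a in ("gfx1100",)):
        torch.backends.cuda.enable_mem_efficient_sdp(True)
```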
2025-02-14 04:18:14 -05:00
Robin Huang
042a905c37
Open yaml files with utf-8 encoding for extra_model_paths.yaml (#6807)
...
* Using utf-8 encoding for yaml files.
* Fix test assertion.
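The fix boils down to passing an explicit encoding when reading the file:

```python
import yaml

def load_extra_model_paths(path="extra_model_paths.yaml"):
    # Without encoding="utf-8", Windows falls back to the locale codepage
    # (e.g. cp1252), which breaks non-ASCII paths in the yaml file.
    with open(path, "r", encoding="utf-8") as f:
        return yaml.safe_load(f)
```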
2025-02-13 20:39:04 -05:00
comfyanonymous
019c7029ea
Add a way to set a different compute dtype for the model at runtime.
...
Currently only works for diffusion models.
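Presumably this is exposed through the model patcher; a hypothetical usage sketch (the setter name is an assumption based on the description):

```python
import torch

def with_compute_dtype(model_patcher, dtype=torch.float16):
    # Clone so the change applies only to this runtime copy, then ask the
    # patcher to run the diffusion model's compute in `dtype`.
    # set_model_compute_dtype is an assumed method name.
    m = model_patcher.clone()
    m.set_model_compute_dtype(dtype)
    return m
```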
2025-02-13 20:34:03 -05:00
comfyanonymous
8773ccf74d
Better memory estimation for ROCm devices that support mem efficient attention.
...
There is no way to check whether the card actually supports it, so it is
assumed to do so when you use --use-pytorch-cross-attention.
2025-02-13 08:32:36 -05:00
doctorpangloss
31b6b53236
Quality of life improvements
...
- export_custom_nodes() finds all the classes that inherit from
CustomNode and exports them so that custom node discovery can find
them (a sketch follows below)
- regular expression nodes
- additional string formatting and parsing nodes
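A minimal sketch of what export_custom_nodes() could look like, assuming a CustomNode base class; names and behaviour are illustrative rather than the commit's exact code:

```python
import inspect
import sys

class CustomNode:
    # Stand-in for the real CustomNode base class.
    pass

def export_custom_nodes(module_name=None):
    # Collect every CustomNode subclass defined in the given (or calling)
    # module and return the mapping node discovery would consume.
    module = sys.modules[module_name or __name__]
    return {
        name: cls
        for name, cls in inspect.getmembers(module, inspect.isclass)
        if issubclass(cls, CustomNode) and cls is not CustomNode
    }
```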
2025-02-12 14:12:10 -08:00
doctorpangloss
cf08b11132
Colour package added to requirements
2025-02-12 12:48:51 -08:00
comfyanonymous
1d5d6586f3
Fix ruff.
2025-02-12 06:49:16 -05:00
zhoufan2956
35740259de
Fix Ascend bf16 inference error (#6794)
2025-02-12 06:48:11 -05:00
comfyanonymous
ab888e1e0b
Add add_weight_wrapper function to model patcher.
...
Functions can now easily be added to wrap/modify model weights.
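A hedged usage sketch; the exact signature of add_weight_wrapper is assumed from the description:

```python
def add_halving_wrapper(model_patcher, key="diffusion_model."):
    # Illustrative wrapper that scales a weight before it is used,
    # demonstrating the "wrap/modify model weights" hook.
    def scale_weight(weight):
        return weight * 0.5
    # Assumed signature (name/key, function); the real API may differ.
    model_patcher.add_weight_wrapper(key, scale_weight)
```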
2025-02-12 05:55:35 -05:00
comfyanonymous
d9f0fcdb0c
Cleanup.
2025-02-11 17:17:03 -05:00
HishamC
b124256817
Fix for running via DirectML (#6542)
...
* Fix for running via DirectML
Fix the DirectML empty image generation issue with Flux1, and add a CPU fallback for the unsupported path. Verified the model works on AMD GPUs.
* fix formatting
* update causal mask calculation (see the sketch below)
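For context, a causal attention mask is typically built as an upper-triangular -inf matrix; computing it on CPU and moving it over is one common fallback for backends missing an op (a sketch, not the commit's exact code):

```python
import torch

def causal_mask(seq_len, device, dtype=torch.float32):
    # Position i may only attend to positions j <= i, so everything
    # above the diagonal is masked out with -inf.
    mask = torch.full((seq_len, seq_len), float("-inf"), dtype=dtype)
    mask = torch.triu(mask, diagonal=1)
    # Built on CPU first, then moved, as a fallback for backends such as
    # DirectML that may not support the ops involved.
    return mask.to(device)
```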
2025-02-11 17:11:32 -05:00
comfyanonymous
af4b7c91be
Make --force-fp16 actually force the diffusion model to be fp16.
2025-02-11 08:33:09 -05:00
bananasss00
e57d2282d1
Fix incorrect Content-Type for WebP images (#6752)
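The fix amounts to serving the correct MIME type for the extension; a minimal illustration:

```python
import mimetypes

# A server that hardcodes "image/png" for all previews exhibits this bug;
# deriving the type from the filename handles .webp correctly.
content_type = mimetypes.guess_type("preview.webp")[0] or "image/webp"
```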
2025-02-11 04:48:35 -05:00
comfyanonymous
4027466c80
Make lumina model work with any latent resolution.
2025-02-10 00:24:20 -05:00
comfyanonymous
095d867147
Remove useless function.
2025-02-09 07:02:57 -05:00
Pam
caeb27c3a5
res_multistep: Fix cfgpp and add ancestral samplers (#6731)
2025-02-08 19:39:58 -05:00
comfyanonymous
3d06e1c555
Make error more clear to user.
2025-02-08 18:57:24 -05:00
catboxanon
43a74c0de1
Allow FP16 accumulation with --fast (#6453)
...
Currently only applies to PyTorch nightly releases (>= 20250208).
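On nightlies that ship the toggle, enabling it is a one-liner; probing for the attribute keeps older releases working (a sketch based on the commit description):

```python
import torch

# allow_fp16_accumulation only exists on sufficiently new PyTorch
# nightlies (>= 20250208 per the commit), so check before setting it.
if hasattr(torch.backends.cuda.matmul, "allow_fp16_accumulation"):
    torch.backends.cuda.matmul.allow_fp16_accumulation = True
```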
2025-02-08 17:00:56 -05:00
comfyanonymous
af93c8d1ee
Document which text encoder to use for lumina 2.
2025-02-08 06:57:25 -05:00
Raphael Walker
832e3f5ca3
Fix another small bug in attention_bias redux (#6737)
...
* fix a bug in the attn_masked redux code when using weight=1.0
* fix a second bug found in the same code
2025-02-07 14:44:43 -05:00
doctorpangloss
0a1b118cd4
try to free disk space using a better script
2025-02-07 07:23:01 -08:00
doctorpangloss
713c3f06af
Use vanilla container
2025-02-07 07:05:48 -08:00
doctorpangloss
3fa9e98d02
Try to run locally, remove unused workflows, add compose file
2025-02-07 06:53:18 -08:00
comfyanonymous
079eccc92a
Don't compress HTTP responses by default.
...
Remove the argument that disabled it.
Add a new --enable-compress-response-body argument to enable it.
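ComfyUI's server is aiohttp-based; conditional compression could look like this sketch (the middleware shape is assumed, not the commit's code):

```python
from aiohttp import web

def compression_middleware(enabled):
    # Compress response bodies only when the user opted in via the new
    # --enable-compress-response-body flag; otherwise pass through.
    @web.middleware
    async def middleware(request, handler):
        response = await handler(request)
        if enabled:
            response.enable_compression()
        return response
    return middleware
```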
2025-02-07 03:29:21 -05:00
Raphael Walker
b6951768c4
fix a bug in the attn_masked redux code when using weight=1.0 (#6721)
2025-02-06 16:51:16 -05:00
doctorpangloss
ef74b9fdda
More gracefully handle the health check when this connection is not ready
2025-02-06 11:08:09 -08:00
doctorpangloss
4c72ef5bac
Try to free more disk space in github actions runner
2025-02-06 10:43:58 -08:00
doctorpangloss
dfa3a36f83
maximize build space
2025-02-06 10:05:43 -08:00
doctorpangloss
7e122202f6
set image name
2025-02-06 09:56:31 -08:00
doctorpangloss
49fcfaedce
Fix github actions docker image workflow
2025-02-06 09:48:41 -08:00
doctorpangloss
f4647a03e7
Add Docker build action
2025-02-06 09:25:56 -08:00
doctorpangloss
f346e3c82e
Optimize Dockerfile
2025-02-06 09:22:02 -08:00
doctorpangloss
5b3eb2e51c
Fix torch.zeroes error
2025-02-06 09:00:10 -08:00
Comfy Org PR Bot
fca304debf
Update frontend to v1.8.14 (#6724)
...
Co-authored-by: huchenlei <20929282+huchenlei@users.noreply.github.com>
2025-02-06 10:43:10 -05:00
comfyanonymous
14880e6dba
Remove some useless code.
2025-02-06 05:00:37 -05:00
Chenlei Hu
f1059b0b82
Remove unused GET /files API endpoint (#6714)
2025-02-05 18:48:36 -05:00
doctorpangloss
3f1f427ff4
Distinct Seed and Seed64 input specs; numpy only supports 32-bit seeds
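Background for the split: NumPy's legacy RNG only accepts seeds in [0, 2**32), so a 64-bit seed must be folded down before seeding it; a sketch:

```python
import numpy as np

def seed_numpy(seed):
    # np.random.seed raises ValueError for seeds outside [0, 2**32 - 1],
    # hence the separate Seed (32-bit) and Seed64 input specs.
    np.random.seed(seed & 0xFFFFFFFF)
```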
2025-02-05 14:08:09 -08:00
doctorpangloss
6ab1aa1e8a
Improving MLLM/VLLM support and fixing bugs
...
- fix #29: str(model) no longer raises exceptions (e.g. with
HyVideoModelLoader)
- don't try to format CUDA tensors, because that can sometimes raise
exceptions
- cudaAllocAsync has been disabled for now due to torch 2.6.0 bugs
- improve florence2 support
- add support for paligemma 2. This requires a transformers fix that is
currently staged in another repo; install it with
`uv pip install --no-deps "transformers@git+https://github.com/zucchini-nlp/transformers.git#branch=paligemma-fix-kwargs"`
- triton has been updated
- fix missing __init__.py files
2025-02-05 14:02:28 -08:00
comfyanonymous
debabccb84
Bump ComfyUI version to v0.3.14
2025-02-05 15:48:13 -05:00
comfyanonymous
37cd448529
Set the shift for Lumina back to 6.
2025-02-05 14:49:52 -05:00
comfyanonymous
94f21f9301
Upcasting rope to fp32 seems to make no difference in this model.
2025-02-05 04:32:47 -05:00
comfyanonymous
60653004e5
Use regular numbers for rope in lumina model.
2025-02-05 04:17:25 -05:00
comfyanonymous
a57d635c5f
Fix lumina 2 batches.
2025-02-04 21:48:11 -05:00
doctorpangloss
dcac115f68
Revert "Update logging when models are loaded"
...
This reverts commit 0d15a091c2.
2025-02-04 15:18:00 -08:00
doctorpangloss
80db9a8e25
Florence2
2025-02-04 15:17:14 -08:00
doctorpangloss
ce3583ad42
relax numpy requirements
2025-02-04 09:40:22 -08:00
doctorpangloss
1a24ceef79
Updates for torch 2.6.0, prepare Anthropic nodes, accept multiple logging levels
2025-02-04 09:27:18 -08:00
Benjamin Berman
fac670da89
Merge pull request #28 from leszko/patch-2
...
Update logging when models are loaded
2025-02-04 08:28:30 -08:00
Rafał Leszko
0d15a091c2
Update logging when models are loaded
...
The "Loaded " log was logged even if no model were actually loaded into VRAM
2025-02-04 14:44:12 +01:00