Commit Graph

5528 Commits

Author SHA1 Message Date
patientx
385d102633
Revise README for ComfyUI project
Updated project name and description, added various badges and links, and revised installation instructions for ComfyUI.
2025-11-14 12:55:23 +03:00
patientx
66835171c6
Merge branch 'comfyanonymous:master' into master 2025-11-14 00:02:26 +03:00
Alexander Piskun
f91078b1ff
add PR template for API-Nodes (#10736) 2025-11-13 10:05:26 -08:00
patientx
5083c0225a
Update README.md 2025-11-13 13:23:41 +03:00
patientx
184fdd1103
Merge branch 'comfyanonymous:master' into master 2025-11-13 13:23:21 +03:00
patientx
bdbf42ee05
Update README.md 2025-11-13 13:23:13 +03:00
contentis
3b3ef9a77a
Quantized Ops fixes (#10715)
* offload support, bug fixes, remove mixins

* add readme
2025-11-12 18:26:52 -05:00
comfyanonymous
8b0b93df51
Update Python 3.14 compatibility notes in README (#10730) 2025-11-12 17:04:41 -05:00
patientx
9747b75093
Update README.md 2025-11-13 00:55:34 +03:00
patientx
00609a5102
Merge branch 'comfyanonymous:master' into master 2025-11-13 00:55:19 +03:00
patientx
f1c8217364
Update README.md 2025-11-13 00:55:13 +03:00
rattus
1c7eaeca10
qwen: reduce VRAM usage (#10725)
Clean up a bunch of stacked and no-longer-needed tensors at the QWEN
VRAM peak (currently the FFN).

With this I go from OOMing at B=37x1328x1328 to successfully running
B=47 (RTX 5090).
2025-11-12 16:20:53 -05:00
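The technique behind this commit generalizes: dropping references to intermediates before the next large allocation lowers the transient VRAM peak. A minimal PyTorch sketch (illustrative only, not the actual QWEN code):

```python
import torch
import torch.nn.functional as F

def ffn_peaky(x, w1, w2):
    # x, h and a are all still alive when the second matmul
    # allocates its output, so the peak stacks three large tensors.
    h = x @ w1
    a = F.gelu(h)
    return a @ w2

def ffn_lean(x, w1, w2):
    # Rebinding one name drops each intermediate as soon as the
    # next op has consumed it, letting the caching allocator
    # recycle that memory for the following allocation.
    x = x @ w1
    x = F.gelu(x)
    return x @ w2
```

The savings show up at batch sizes near the OOM edge, as in the B=37 to B=47 numbers above.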
rattus
18e7d6dba5
mm/mp: always unload re-used but modified models (#10724)
The partial unloader path in the model re-use flow skips straight to the
actual unload without any check of the patching UUID. This means that
if you do an upscale flow with a model patch on an existing model, it
will not apply your patches.

Fix by delaying the partial_unload until after the UUID checks. This
is done by making partial_unload a mode of partial_load where extra_mem
is negative.
2025-11-12 16:19:53 -05:00
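A schematic of the corrected control flow (hypothetical names; the real logic lives in ComfyUI's model management code):

```python
class LoadedModel:
    def __init__(self, model):
        self.model = model
        self.applied_uuid = None   # identifies the patch set baked into the weights

    def partial_load(self, extra_mem):
        # Negative extra_mem means "give memory back", i.e. a partial unload.
        ...

    def repatch(self, patches, patch_uuid):
        ...
        self.applied_uuid = patch_uuid

    def reuse(self, patches, patch_uuid, extra_mem):
        # Old bug: a partial unload ran here, before the UUID check,
        # so a re-used model could keep stale (unpatched) weights.
        if self.applied_uuid != patch_uuid:
            self.repatch(patches, patch_uuid)
        # Fixed order: adjust memory only after the patches are correct.
        self.partial_load(extra_mem)  # extra_mem < 0 performs the unload
```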
Qiacheng Li
e1d85e7577
Update README.md for Intel Arc GPU installation, remove IPEX (#10729)
IPEX is no longer needed for Intel Arc GPUs, so the instructions for setting it up have been removed.
2025-11-12 15:21:05 -05:00
patientx
644778be49
Merge branch 'comfyanonymous:master' into master 2025-11-12 17:55:30 +03:00
comfyanonymous
1199411747
Don't pin tensor if not a torch.nn.parameter.Parameter (#10718) 2025-11-11 19:33:30 -05:00
patientx
4e14be5d8d
Merge branch 'comfyanonymous:master' into master 2025-11-11 14:02:40 +03:00
comfyanonymous
5ebcab3c7d
Update CI workflow to remove dead macOS runner. (#10704)
* Update CI workflow to remove dead macOS runner.

* revert

* revert
2025-11-10 15:35:29 -05:00
patientx
3662d0a2ce
Merge branch 'comfyanonymous:master' into master 2025-11-10 14:05:53 +03:00
rattus
c350009236
ops: Put weight cast on the offload stream (#10697)
The weight cast needs to be on the offload stream; without this, a black
screen reproduced with low-resolution images on a slow bus when using FP8.
2025-11-09 22:52:11 -05:00
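The pattern, sketched with plain PyTorch stream APIs (illustrative, not the actual ops code): the cast is enqueued on the offload stream so it is ordered with that stream's copies, and the compute stream waits on it before reading the result.

```python
import torch

offload_stream = torch.cuda.Stream()

def cast_weight(weight: torch.Tensor, dtype: torch.dtype) -> torch.Tensor:
    compute_stream = torch.cuda.current_stream()
    # Run the cast on the offload stream so it is ordered with the
    # host<->device transfers that stream already performs.
    with torch.cuda.stream(offload_stream):
        casted = weight.to(dtype)
    # Compute kernels must not read `casted` before the cast finishes.
    compute_stream.wait_stream(offload_stream)
    # Tell the caching allocator another stream uses this tensor.
    casted.record_stream(compute_stream)
    return casted
```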
comfyanonymous
dea899f221
Unload weights if vram usage goes up between runs. (#10690) 2025-11-09 18:51:33 -05:00
comfyanonymous
e632e5de28
Add logging for model unloading. (#10692) 2025-11-09 18:06:39 -05:00
patientx
d56cb56059
Merge branch 'comfyanonymous:master' into master 2025-11-09 13:41:51 +03:00
comfyanonymous
2abd2b5c20
Make ScaleROPE node work on Flux. (#10686) 2025-11-08 15:52:02 -05:00
patientx
8e02689534
Merge branch 'comfyanonymous:master' into master 2025-11-07 20:30:21 +03:00
comfyanonymous
a1a70362ca
Only unpin tensor if it was pinned by ComfyUI (#10677) 2025-11-07 11:15:05 -05:00
patientx
d29dbbd829
Merge branch 'comfyanonymous:master' into master 2025-11-07 14:27:13 +03:00
rattus
cf97b033ee
mm: guard against double pin and unpin explicitly (#10672)
As commented, if you let CUDA be the one to detect a double pin or unpin,
it actually creates an async GPU error.
2025-11-06 21:20:48 -05:00
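The guard pattern, sketched with hypothetical stubs (ComfyUI pins host memory through CUDA's host-register API; `_host_register`/`_host_unregister` below stand in for those calls):

```python
import torch

pinned_by_us = set()  # data_ptrs of buffers this process pinned

def _host_register(ptr: int, nbytes: int):
    pass  # stand-in for cudaHostRegister

def _host_unregister(ptr: int):
    pass  # stand-in for cudaHostUnregister

def safe_pin(t: torch.Tensor):
    # Detect the duplicate in Python: letting CUDA trip over a
    # double host-register surfaces later as an async GPU error.
    ptr = t.data_ptr()
    if ptr in pinned_by_us or t.is_pinned():
        return
    _host_register(ptr, t.numel() * t.element_size())
    pinned_by_us.add(ptr)

def safe_unpin(t: torch.Tensor):
    # Only unpin buffers we pinned ourselves (see also #10677 above).
    ptr = t.data_ptr()
    if ptr in pinned_by_us:
        _host_unregister(ptr)
        pinned_by_us.discard(ptr)
```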
comfyanonymous
eb1c42f649
Tell users they need to upload their logs in bug reports. (#10671) 2025-11-06 20:24:28 -05:00
patientx
9d1f7c5eee
Update README.md 2025-11-06 15:35:55 +03:00
patientx
3ab45ae725
Merge branch 'comfyanonymous:master' into master 2025-11-06 15:35:41 +03:00
patientx
db70e60d01
Update README.md 2025-11-06 15:35:33 +03:00
comfyanonymous
e05c907126
Clarify release cycle. (#10667) 2025-11-06 04:11:30 -05:00
comfyanonymous
09dc24c8a9
Pinned mem also seems to work on AMD. (#10658) 2025-11-05 19:11:15 -05:00
comfyanonymous
1d69245981
Enable pinned memory by default on Nvidia. (#10656)
Removed the --fast pinned_memory flag.

You can use --disable-pinned-memory to disable it. Please report if it
causes any issues.
2025-11-05 18:08:13 -05:00
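For context, what pinned memory buys: page-locked host buffers let host-to-device copies run asynchronously, whereas pageable memory forces non_blocking copies to degrade to synchronous ones. A minimal PyTorch illustration:

```python
import torch

# Pinned (page-locked) source buffer: the copy below can overlap
# with compute already queued on the GPU.
src = torch.randn(4096, 4096, pin_memory=True)
dst = src.to("cuda", non_blocking=True)
```

The application-level opt-out is the --disable-pinned-memory flag mentioned above.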
comfyanonymous
97f198e421
Fix qwen controlnet regression. (#10657) 2025-11-05 18:07:35 -05:00
patientx
c25f5d3152
Merge branch 'comfyanonymous:master' into master 2025-11-05 20:08:21 +03:00
Alexander Piskun
bda0eb2448
feat(API-nodes): move Rodin3D nodes to new client; removed old api client.py (#10645) 2025-11-05 02:16:00 -08:00
patientx
84faf45f09
Merge branch 'comfyanonymous:master' into master 2025-11-05 13:07:02 +03:00
comfyanonymous
c4a6b389de
Lower ltxv mem usage to what it was before the previous PR. (#10643)
Bring back qwen behavior to what it was before the previous PR.
2025-11-04 22:47:35 -05:00
contentis
4cd881866b
Use single apply_rope function across models (#10547) 2025-11-04 20:10:11 -05:00
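This deduplicates per-model copies of the RoPE math into one helper; a Flux-style sketch of what such a shared function typically looks like (illustrative, not necessarily the exact signature adopted):

```python
import torch

def apply_rope(xq: torch.Tensor, xk: torch.Tensor, freqs_cis: torch.Tensor):
    # View the last dim as (pairs, 1, 2) and rotate each pair by the
    # precomputed cos/sin phases in freqs_cis; one shared helper
    # instead of a re-implementation per architecture.
    xq_ = xq.float().reshape(*xq.shape[:-1], -1, 1, 2)
    xk_ = xk.float().reshape(*xk.shape[:-1], -1, 1, 2)
    xq_out = freqs_cis[..., 0] * xq_[..., 0] + freqs_cis[..., 1] * xq_[..., 1]
    xk_out = freqs_cis[..., 0] * xk_[..., 0] + freqs_cis[..., 1] * xk_[..., 1]
    return xq_out.reshape(*xq.shape).type_as(xq), xk_out.reshape(*xk.shape).type_as(xk)
```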
comfyanonymous
265adad858
ComfyUI version v0.3.68 2025-11-04 19:42:23 -05:00
comfyanonymous
7f3e4d486c
Limit amount of pinned memory on windows to prevent issues. (#10638) 2025-11-04 17:37:50 -05:00
rattus
a389ee01bb
caching: Handle None outputs tuple case (#10637) 2025-11-04 14:14:10 -08:00
ComfyUI Wiki
9c71a66790
chore: update workflow templates to v0.2.11 (#10634) 2025-11-04 10:51:53 -08:00
patientx
11083ab58c
Merge branch 'comfyanonymous:master' into master 2025-11-04 13:09:30 +03:00
comfyanonymous
af4b7b5edb
More fp8 torch.compile regressions fixed. (#10625) 2025-11-03 22:14:20 -05:00
comfyanonymous
0f4ef3afa0
This seems to slow things down slightly on Linux. (#10624) 2025-11-03 21:47:14 -05:00
comfyanonymous
6b88478f9f
Bring back fp8 torch compile performance to what it should be. (#10622) 2025-11-03 19:22:10 -05:00
comfyanonymous
e199c8cc67
Fixes (#10621) 2025-11-03 17:58:24 -05:00