Commit Graph

176 Commits

Sasbom
0ef5557d6a Add QOL feature for changing the custom nodes folder location through cli args.
bugfix: fix typo in apply_directory for custom_nodes_directory

allow for PATH style ';' delimited custom_node directories.

change delimiter type for separate folders per platform.
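The per-platform delimiter mentioned above is what Python exposes as `os.pathsep` (`';'` on Windows, `':'` elsewhere). A minimal sketch of parsing such an argument; the function name is illustrative, not ComfyUI's actual implementation:

```python
import os

def parse_custom_node_dirs(arg: str) -> list[str]:
    # Split a PATH-style argument on the platform delimiter
    # (os.pathsep) and drop empty entries.
    return [p for p in arg.split(os.pathsep) if p]
```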

feat(API-nodes): move Rodin3D nodes to new client; removed old api client.py (#10645)

Fix qwen controlnet regression. (#10657)

Enable pinned memory by default on Nvidia. (#10656)

Removed the --fast pinned_memory flag.

You can use --disable-pinned-memory to disable it. Please report if it
causes any issues.
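The flag flip above amounts to an opt-out default. A minimal sketch, assuming a plain argparse parser (the flag name comes from the commit; the parser itself is illustrative, not ComfyUI's actual cli_args module):

```python
import argparse

parser = argparse.ArgumentParser()
# Pinned memory is on by default; users opt out with this flag.
parser.add_argument("--disable-pinned-memory", action="store_true")

args = parser.parse_args([])
use_pinned_memory = not args.disable_pinned_memory  # True unless opted out
```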

Pinned mem also seems to work on AMD. (#10658)

Remove environment variable.

Removed environment variable fallback for custom nodes directory.

Update documentation for custom nodes directory

Clarified documentation on custom nodes directory argument, removed documentation on environment variable

Clarify release cycle. (#10667)

Tell users they need to upload their logs in bug reports. (#10671)

mm: guard against double pin and unpin explicitly (#10672)

As commented, if you let CUDA be the one to detect a double pin or
unpin, it actually creates an async GPU error.

Only unpin tensor if it was pinned by ComfyUI (#10677)
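The two commits above share one idea: track pinned tensors explicitly in Python rather than letting CUDA report the error. A minimal bookkeeping sketch, with illustrative names and the actual pin/unpin calls elided:

```python
# Registry of objects pinned by this code path (illustrative).
_pinned_ids: set[int] = set()

def pin_once(tensor) -> bool:
    # Guard against double pinning: only pin on first sight.
    if id(tensor) in _pinned_ids:
        return False
    _pinned_ids.add(id(tensor))
    # ... real code would pin the tensor's memory here ...
    return True

def unpin_if_ours(tensor) -> bool:
    # Only unpin tensors that we pinned ourselves.
    if id(tensor) not in _pinned_ids:
        return False
    _pinned_ids.remove(id(tensor))
    # ... real code would unpin here ...
    return True
```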

Make ScaleROPE node work on Flux. (#10686)

Add logging for model unloading. (#10692)

Unload weights if vram usage goes up between runs. (#10690)

ops: Put weight cast on the offload stream (#10697)

The weight cast needs to be on the offload stream. The bug reproduced
as a black screen with low-resolution images on a slow bus when using FP8.

Update CI workflow to remove dead macOS runner. (#10704)

* Update CI workflow to remove dead macOS runner.

* revert

* revert

Don't pin tensor if not a torch.nn.parameter.Parameter (#10718)

Update README.md for Intel Arc GPU installation, remove IPEX (#10729)

IPEX is no longer needed for Intel Arc GPUs. Removed the instructions to set up IPEX.

mm/mp: always unload re-used but modified models (#10724)

The partial unloader path in the model re-use flow skips straight to the
actual unload without any check of the patching UUID. This means that
if you do an upscale flow with a model patch on an existing model, it
will not apply your patches.

Fix by delaying the partial_unload until after the UUID checks. This
is done by making partial_unload a mode of partial_load where extra_mem
is negative.
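The "negative extra_mem" trick above can be sketched as one function whose sign selects load vs. unload. Names, the memory model, and units are all illustrative, not ComfyUI's actual model-management code:

```python
def partial_load(model: dict, extra_mem: int) -> int:
    # Negative extra_mem means "free this much", i.e. a partial unload,
    # so both paths go through the same entry point (and thus the same
    # preceding UUID checks in the real code).
    if extra_mem < 0:
        freed = min(model["loaded"], -extra_mem)
        model["loaded"] -= freed
        return -freed
    loaded = min(model["total"] - model["loaded"], extra_mem)
    model["loaded"] += loaded
    return loaded
```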

qwen: reduce VRAM usage (#10725)

Clean up a bunch of stacked and no-longer-needed tensors at the QWEN
VRAM peak (currently the FFN).

With this I go from OOMing at B=37x1328x1328 to being able to
successfully run B=47 (RTX 5090).
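The pattern behind this commit (and the Flux one below) is dropping references to intermediates as soon as they are consumed, so the allocator can reuse that memory before the next big allocation. Shown here with plain lists as stand-ins for CUDA tensors; the function is illustrative:

```python
def ffn_like(x: list[int]) -> list[int]:
    hidden = [v * 2 for v in x]    # large intermediate
    gated = [v + 1 for v in hidden]
    del hidden                     # release before the next allocation
    out = [v * v for v in gated]
    del gated
    return out
```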

Update Python 3.14 compatibility notes in README (#10730)

Quantized Ops fixes (#10715)

* offload support, bug fixes, remove mixins

* add readme

add PR template for API-Nodes (#10736)

feat: add create_time dict to prompt field in /history and /queue (#10741)

flux: reduce VRAM usage (#10737)

Clean up a bunch of stacked tensors on Flux. This takes me from B=19 to
B=22 for 1600x1600 on RTX 5090.

Better instructions for the portable. (#10743)

Use same code for chroma and flux blocks so that optimizations are shared. (#10746)

Fix custom nodes import error. (#10747)

This should fix the import errors but will break if the custom nodes actually try to use the class.

revert import reordering

revert imports pt 2

Add left padding support to tokenizers. (#10753)
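A minimal illustration of what left padding means for a tokenizer: pad tokens go before the sequence so the real tokens end at the right edge. Real tokenizers also adjust attention masks; the function and the pad id of 0 are assumptions, not the actual ComfyUI tokenizer API:

```python
def pad_left(tokens: list[int], length: int, pad_id: int = 0) -> list[int]:
    # Prepend pad tokens; sequences already at or over length pass through.
    return [pad_id] * max(0, length - len(tokens)) + tokens
```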

chore(api-nodes): mark OpenAIDalle2 and OpenAIDalle3 nodes as deprecated (#10757)

Revert "chore(api-nodes): mark OpenAIDalle2 and OpenAIDalle3 nodes as deprecated (#10757)" (#10759)

This reverts commit 9a02382568.

Change ROCm nightly install command to 7.1 (#10764)
2025-11-17 06:16:21 +01:00
comfyanonymous
3bea4efc6b
Tell users to update nvidia drivers if problem with portable. (#10510) 2025-10-28 04:45:45 -04:00
comfyanonymous
a1864c01f2
Small readme improvement. (#10442) 2025-10-22 17:26:22 -04:00
comfyanonymous
92d97380bd
Update Python 3.14 installation instructions (#10385)
Removed mention of installing pytorch nightly for Python 3.14.
2025-10-17 18:22:59 -04:00
comfyanonymous
6b035bfce2
Latest pytorch stable is cu130 (#10361) 2025-10-15 18:48:12 -04:00
comfyanonymous
84867067ea
Python 3.14 instructions. (#10337) 2025-10-14 02:09:12 -04:00
comfyanonymous
bbd683098e
Add instructions to install nightly AMD pytorch for windows. (#10190)
* Add instructions to install nightly AMD pytorch for windows.

* Update README.md
2025-10-03 23:37:43 -04:00
comfyanonymous
08726b64fe
Update amd nightly command in readme. (#10189) 2025-10-03 18:22:43 -04:00
comfyanonymous
f48d7230de
Add new portable links to readme. (#10112) 2025-09-30 12:17:49 -04:00
comfyanonymous
b60dc31627
Update command to install latest nightly pytorch. (#10085) 2025-09-28 13:41:32 -04:00
comfyanonymous
4f1f26ac6c
Add that hunyuan image is supported to readme. (#9857) 2025-09-14 04:05:38 -04:00
comfyanonymous
f6b93d41a0
Remove models from readme that are not fully implemented. (#9535)
Cosmos model implementations are currently missing the safety part so it is technically not fully implemented and should not be advertised as such.
2025-08-24 15:40:32 -04:00
comfyanonymous
59eddda900
Python 3.13 is well supported. (#9511) 2025-08-23 01:36:44 -04:00
comfyanonymous
e73a9dbe30
Add that qwen edit model is supported to readme. (#9463) 2025-08-20 17:34:13 -04:00
Simon Lui
c991a5da65
Fix XPU iGPU regressions (#9322)
* Change the bf16 check, switch non_blocking off by default (with an option to force it on to regain speed on certain classes of iGPUs), and refactor the XPU check.

* Turn non_blocking off by default for xpu.

* Update README.md for Intel GPUs.
2025-08-13 19:13:35 -04:00
comfyanonymous
0552de7c7d
Bump pytorch cuda and rocm versions in readme instructions. (#9273) 2025-08-10 05:03:47 -04:00
comfyanonymous
d8c51ba15a
Add Qwen Image model to readme. (#9191) 2025-08-05 07:41:18 -04:00
comfyanonymous
97b8a2c26a
More accurate explanation of release process. (#9126) 2025-07-31 05:46:23 -04:00
comfyanonymous
9f1388c0a3
Add wan2.2 to readme. (#9081) 2025-07-28 08:01:53 -04:00
comfyanonymous
78672d0ee6
Small readme update. (#9071) 2025-07-27 07:42:58 -04:00
honglyua
0ccc88b03f
Support Iluvatar CoreX (#8585)
* Support Iluvatar CoreX
Co-authored-by: mingjiang.li <mingjiang.li@iluvatar.com>
2025-07-24 13:57:36 -04:00
comfyanonymous
5249e45a1c
Add hidream e1.1 example to readme. (#8990) 2025-07-21 15:23:41 -04:00
comfyanonymous
5612670ee4
Remove unmaintained notebook. (#8845) 2025-07-09 03:45:48 -04:00
comfyanonymous
27870ec3c3
Add that ckpt files are loaded safely to README. (#8791) 2025-07-04 04:49:11 -04:00
comfyanonymous
e9af97ba1a
Use torch cu129 for nvidia pytorch nightly. (#8786)
* update nightly workflow with cu129

* Remove unused file to lower standalone size.
2025-07-03 14:39:11 -04:00
comfyanonymous
bd951a714f
Add Flux Kontext and Omnigen 2 models to readme. (#8682) 2025-06-26 12:26:29 -04:00
comfyanonymous
dd94416db2
Indicate that directml is not recommended in the README. (#8644) 2025-06-23 14:04:49 -04:00
comfyanonymous
4459a17e82
Add Cosmos Predict2 to README. (#8562) 2025-06-17 05:18:01 -04:00
Olexandr88
d8759c772b
Update README.md (#8427) 2025-06-05 10:44:29 -07:00
comfyanonymous
310f4b6ef8
Add api nodes to readme. (#8402) 2025-06-03 04:26:44 -04:00
filtered
4f3b50ba51
Update README ROCm text to match link (#8199)
- Follow-up on #8198
2025-05-19 16:40:55 -04:00
comfyanonymous
e930a387d6
Update AMD instructions in README. (#8198) 2025-05-19 04:58:41 -04:00
filtered
7046983d95
Remove Desktop versioning claim from README (#8155) 2025-05-16 10:45:36 -07:00
comfyanonymous
08368f8e00
Update comment on ROCm pytorch attention in README. (#8123) 2025-05-14 17:54:50 -04:00
comfyanonymous
924d771e18
Add ACE Step to README. (#8005) 2025-05-08 08:40:57 -04:00
Chenlei Hu
45503f6499
Add release process section to README (#7855)
* Add release process section to README

* move

* Update README.md
2025-04-29 06:32:34 -04:00
comfyanonymous
005a91ce2b
Latest desktop and portable should work on blackwell. (#7861)
Removed the mention about the cards from the readme.
2025-04-29 06:29:38 -04:00
comfyanonymous
21a11ef817
Pytorch stable 2.7 is out and supports cu128 (#7749) 2025-04-23 05:12:59 -04:00
comfyanonymous
880c205df1 Add hidream to readme. 2025-04-17 16:58:27 -04:00
comfyanonymous
eade1551bb Add Hunyuan3D to readme. 2025-03-24 07:14:32 -04:00
comfyanonymous
5dbd250965 Update nightly instructions in readme. 2025-03-07 07:57:59 -05:00
comfyanonymous
714f728820 Add to README that the Wan model is supported. 2025-02-26 20:48:50 -05:00
comfyanonymous
92d8d15300 Readme changes.
Instructions shouldn't recommend running comfyui with --listen
2025-02-26 20:47:08 -05:00
BiologicalExplosion
89253e9fe5
Support Cambricon MLU (#6964)
Co-authored-by: huzhan <huzhan@cambricon.com>
2025-02-26 20:45:13 -05:00
Yoland Yan
189da3726d
Update README.md (#6960) 2025-02-25 17:17:18 -08:00
Robin Huang
4553891bbd
Update installation documentation to include desktop + cli. (#6899)
* Update installation documentation.

* Add portable to description.

* Move cli further down.
2025-02-23 19:13:39 -05:00
filtered
f579a740dd
Update frontend release schedule in README. (#6908)
Changes release schedule from weekly to fortnightly.
2025-02-21 05:58:12 -05:00
Robin Huang
d37272532c
Add discord channel to support section. (#6900) 2025-02-20 18:26:16 -05:00
comfyanonymous
016b219dcc Add Lumina Image 2.0 to Readme. 2025-02-04 08:08:36 -05:00
comfyanonymous
541dc08547 Update Readme. 2025-01-31 08:35:48 -05:00