Commit Graph

4485 Commits

Author SHA1 Message Date
comfyanonymous
17027f2a6a
Add a way to disable the final norm in the llama based TE models. (#10794) 2025-11-18 22:36:03 -05:00
comfyanonymous
b5c8be8b1d ComfyUI 0.3.70 2025-11-18 19:37:20 -05:00
Alexander Piskun
24fdb92edf
feat(api-nodes): add new Gemini model (#10789) 2025-11-18 14:26:44 -08:00
comfyanonymous
d526974576
Fix hunyuan 3d 2.0 (#10792) 2025-11-18 16:46:19 -05:00
Jukka Seppänen
e1ab6bb394
EasyCache: Fix for mismatch in input/output channels with some models (#10788)
Slices the model input to the output channel count so the cache tracks only the noise channels; resolves the channel mismatch with models like WanVideo I2V

Also fixes a slicing deprecation in PyTorch 2.9
2025-11-18 07:00:21 -08:00
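The EasyCache fix above can be sketched in plain Python, using lists as stand-ins for per-channel tensors. The function name and arguments are hypothetical illustrations, not ComfyUI's actual EasyCache API:

```python
def cache_residual(model_input, model_output):
    """Sketch: compute the cache residual only over the noise channels.

    I2V-style models (e.g. WanVideo I2V) concatenate extra conditioning
    channels onto the input, so the input has more channels than the
    output. Slicing the input to the output's channel count avoids the
    shape mismatch when subtracting.
    """
    noise_channels = len(model_output)           # output carries only noise channels
    sliced_input = model_input[:noise_channels]  # drop the conditioning channels
    return [o - i for o, i in zip(model_output, sliced_input)]

# 36 input channels (16 noise + 20 conditioning), 16 output channels
x_in = [1.0] * 36
x_out = [3.0] * 16
print(cache_residual(x_in, x_out))  # 16 entries of 2.0
```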
Alexander Piskun
048f49adbd
chore(api-nodes): adjusted PR template; set min python version for pylint to 3.10 (#10787) 2025-11-18 03:59:27 -08:00
comfyanonymous
47bfd5a33f
Native block swap custom nodes considered harmful. (#10783) 2025-11-18 00:26:44 -05:00
ComfyUI Wiki
fdf49a2861
Fix the portable download link for CUDA 12.6 (#10780) 2025-11-17 22:04:06 -05:00
comfyanonymous
f41e5f398d
Update README with new portable download link (#10778) 2025-11-17 19:59:19 -05:00
comfyanonymous
27cbac865e
Add release workflow for NVIDIA cu126 (#10777) 2025-11-17 19:04:04 -05:00
comfyanonymous
3d0003c24c ComfyUI version 0.3.69 2025-11-17 17:17:24 -05:00
comfyanonymous
7d6103325e
Change ROCm nightly install command to 7.1 (#10764) 2025-11-16 03:01:14 -05:00
Alexander Piskun
2d4a08b717
Revert "chore(api-nodes): mark OpenAIDalle2 and OpenAIDalle3 nodes as deprecated (#10757)" (#10759)
This reverts commit 9a02382568.
2025-11-15 12:37:34 -08:00
Alexander Piskun
9a02382568
chore(api-nodes): mark OpenAIDalle2 and OpenAIDalle3 nodes as deprecated (#10757) 2025-11-15 11:18:49 -08:00
comfyanonymous
bd01d9f7fd
Add left padding support to tokenizers. (#10753) 2025-11-15 06:54:40 -05:00
comfyanonymous
443056c401
Fix custom nodes import error. (#10747)
This should fix the import errors but will break if the custom nodes actually try to use the class.
2025-11-14 03:26:05 -05:00
comfyanonymous
f60923590c
Use same code for chroma and flux blocks so that optimizations are shared. (#10746) 2025-11-14 01:28:05 -05:00
comfyanonymous
1ef328c007
Better instructions for the portable. (#10743) 2025-11-13 21:32:39 -05:00
rattus
94c298f962
flux: reduce VRAM usage (#10737)
Clean up a bunch of stacked tensors on Flux. This takes me from B=19 to B=22
for 1600x1600 on an RTX 5090.
2025-11-13 16:02:03 -08:00
ric-yu
2fde9597f4
feat: add create_time dict to prompt field in /history and /queue (#10741) 2025-11-13 15:11:52 -08:00
Alexander Piskun
f91078b1ff
add PR template for API-Nodes (#10736) 2025-11-13 10:05:26 -08:00
contentis
3b3ef9a77a
Quantized Ops fixes (#10715)
* offload support, bug fixes, remove mixins

* add readme
2025-11-12 18:26:52 -05:00
comfyanonymous
8b0b93df51
Update Python 3.14 compatibility notes in README (#10730) 2025-11-12 17:04:41 -05:00
rattus
1c7eaeca10
qwen: reduce VRAM usage (#10725)
Clean up a bunch of stacked and no-longer-needed tensors at the QWEN
VRAM peak (currently the FFN).

With this I go from OOMing at B=37x1328x1328 to being able to
successfully run B=47 (RTX 5090).
2025-11-12 16:20:53 -05:00
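The VRAM-peak pattern described above can be sketched as follows: release each intermediate as soon as it is consumed, so the peak holds one large tensor instead of three. The function and the stand-in callables are illustrative, not QWEN's real layers:

```python
def ffn_forward(x, up_proj, act, down_proj):
    """Sketch: drop no-longer-needed intermediates inside the FFN so
    the memory peak is lower. Stand-in callables replace real layers."""
    h = up_proj(x)
    a = act(h)
    del h          # pre-activation tensor is no longer needed; free it before the down projection
    out = down_proj(a)
    del a
    return out

# toy stand-ins operating on lists of floats
out = ffn_forward([1.0, 2.0],
                  up_proj=lambda t: [v * 4 for v in t],
                  act=lambda t: [max(v, 0.0) for v in t],
                  down_proj=lambda t: [sum(t)])
print(out)  # [12.0]
```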
rattus
18e7d6dba5
mm/mp: always unload re-used but modified models (#10724)
The partial unloader path in the model re-use flow skips straight to the
actual unload without any check of the patching UUID. This means that
if you do an upscale flow with a model patch on an existing model, it
will not apply your patches.

Fix by delaying the partial_unload until after the UUID checks. This
is done by making partial_unload a mode of partial_load where extra_mem
is negative.
2025-11-12 16:19:53 -05:00
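The re-use flow fix above can be sketched as: check the patching UUID before any partial unload, and express partial_unload as a partial_load with negative extra_mem. The class and method names are illustrative, not ComfyUI's real model-management API:

```python
class LoadedModelSketch:
    """Minimal sketch of the fixed model re-use flow."""

    def __init__(self, patches_uuid):
        self.patches_uuid = patches_uuid
        self.events = []

    def apply_patches(self, new_uuid):
        self.patches_uuid = new_uuid
        self.events.append(("patch", new_uuid))

    def partial_load(self, extra_mem):
        # negative extra_mem means "partially unload that much memory"
        if extra_mem < 0:
            self.events.append(("partial_unload", -extra_mem))
        else:
            self.events.append(("partial_load", extra_mem))

    def reuse(self, requested_uuid, extra_mem):
        # The bug: unloading first skipped the UUID check, so a patched
        # upscale flow re-used stale weights. Fix: the patch check comes first.
        if self.patches_uuid != requested_uuid:
            self.apply_patches(requested_uuid)
        self.partial_load(extra_mem)

m = LoadedModelSketch(patches_uuid="base")
m.reuse(requested_uuid="upscale-patch", extra_mem=-512)
print(m.events)  # patch applied before the partial unload
```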
Qiacheng Li
e1d85e7577
Update README.md for Intel Arc GPU installation, remove IPEX (#10729)
IPEX is no longer needed for Intel Arc GPUs. Removed the instructions to set up IPEX.
2025-11-12 15:21:05 -05:00
comfyanonymous
1199411747
Don't pin tensor if not a torch.nn.parameter.Parameter (#10718) 2025-11-11 19:33:30 -05:00
John Alva
708ee2de73 Simplify Windows launcher and PR scope:
- Remove non-upstream packages from requirements.txt
- Make dependency check advisory only (no auto-install)
- Remove CUDA PyTorch auto-install/update flows
- Trim banner and keep minimal preflight checks
- Drop non-portable create_shortcut.ps1
2025-11-11 16:18:16 -06:00
John Alva
ef3c64449a Remove planning and PR helper docs from PR; keep copies on branch preinstall-enhancements-docs 2025-11-11 16:09:59 -06:00
John Alva
5b4c2ff924 UX: preflight banner, fast torch detection, weekly full-check, non-blocking prompts, optional updates, auto port selection and browser open 2025-11-11 11:18:02 -06:00
John-Caldwell
0443944dbb Merge branch 'preinstall-enhancements' of https://github.com/John-Caldwell/ComfyUI into preinstall-enhancements 2025-11-11 08:41:15 -06:00
John-Caldwell
d90159f58b Update ASCII art banner to display COMFY 2025-11-11 08:40:53 -06:00
John-Caldwell
5818270ea3 Add screenshots directory and finalize PR for review 2025-11-11 01:35:31 -06:00
John-Caldwell
cfee48d3b7 Update checklist: Feature Request issue #10705 created and referenced 2025-11-10 18:09:47 -06:00
John-Caldwell
a6769cb56e Add issue number #10705 to PR description
Feature Request issue created successfully. PR now references the issue as required by contribution guidelines.
2025-11-10 18:08:34 -06:00
John-Caldwell
04139fe528 Update submission checklist and add final summary
- Updated PR_SUBMISSION_CHECKLIST.md with Feature Request issue creation step
- Added FINAL_SUMMARY.md with complete status and next steps
- All documentation and preparation complete
2025-11-10 18:00:29 -06:00
John-Caldwell
55b7ea2bbd Add Feature Request issue content and update PR description for compliance
- Created FEATURE_REQUEST_ISSUE.md with complete issue content
- Created CREATE_ISSUE_INSTRUCTIONS.md with step-by-step instructions
- Created PR_COMPLIANCE_ANALYSIS.md analyzing compliance with wiki requirements
- Created SEARCH_RESULTS_SUMMARY.md documenting search for existing issues/PRs
- Updated PR_DESCRIPTION.md with:
  - Issue Addressed section (explicitly states problems solved)
  - Potential Concerns and Side Effects section (required by wiki)
  - PR Size Note (acknowledges large PR size)
  - Placeholder for Related Issue number
2025-11-10 17:59:35 -06:00
John-Caldwell
2bfd78c46a Add PR submission checklist and screenshot recommendations 2025-11-10 17:21:41 -06:00
John-Caldwell
1a56b1dcea Add comprehensive PR description for preinstall enhancements 2025-11-10 17:20:54 -06:00
John-Caldwell
52d13ef3a8 Add plan document for preinstall enhancements PR 2025-11-10 16:30:23 -06:00
comfyanonymous
5ebcab3c7d
Update CI workflow to remove dead macOS runner. (#10704)
* Update CI workflow to remove dead macOS runner.

* revert

* revert
2025-11-10 15:35:29 -05:00
John-Caldwell
f65290f9a5 Add create_shortcut.ps1 for desktop shortcut creation 2025-11-10 11:54:36 -06:00
John-Caldwell
1365bbf859 Enhanced run_comfyui.bat with UTF-8 encoding, progress bars, and CUDA PyTorch auto-installation
- Added UTF-8 encoding (chcp 65001) to fix Unicode character display in ASCII art header

- Enabled progress bars for all pip installations (--progress-bar on)

- Fixed CUDA PyTorch auto-installation logic to properly continue to ComfyUI launch

- Updated CUDA availability variables after successful installation

- Fixed the misleading "Restart" message to accurately say "Continue to launch"

- Improved error handling and user feedback throughout the installation process
2025-11-10 11:53:36 -06:00
John-Caldwell
0f93e63be4 Add enhanced batch file with optional dependency checking
Enhanced run_comfyui.bat with:
- Automatic detection of missing critical dependencies
- Virtual environment detection and warnings
- Optional user-prompted installation with clear warnings
- Comprehensive dependency checking for all essential packages

NOTE: Author is not a professional coder and relied heavily on Cursor AI for implementation. Please review code thoroughly before merging.
2025-11-10 10:27:52 -06:00
rattus
c350009236
ops: Put weight cast on the offload stream (#10697)
The weight cast needs to be on the offload stream. The bug reproduced as a black
screen with low-resolution images on a slow bus when using FP8.
2025-11-09 22:52:11 -05:00
comfyanonymous
dea899f221
Unload weights if vram usage goes up between runs. (#10690) 2025-11-09 18:51:33 -05:00
comfyanonymous
e632e5de28
Add logging for model unloading. (#10692) 2025-11-09 18:06:39 -05:00
comfyanonymous
2abd2b5c20
Make ScaleROPE node work on Flux. (#10686) 2025-11-08 15:52:02 -05:00
comfyanonymous
a1a70362ca
Only unpin tensor if it was pinned by ComfyUI (#10677) 2025-11-07 11:15:05 -05:00
rattus
cf97b033ee
mm: guard against double pin and unpin explicitly (#10672)
As commented, if you let CUDA be the one to detect a double pin or unpin,
it actually creates an async GPU error.
2025-11-06 21:20:48 -05:00
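The explicit pin/unpin guard above can be sketched as a small registry checked before touching CUDA, so a double pin or a stray unpin is skipped instead of surfacing later as an async GPU error. The registry and function names are illustrative, not ComfyUI's actual internals:

```python
# Registry of pinned allocations, keyed by a stable id (illustrative).
_pinned_ids = set()

def pin_memory(tensor_id):
    if tensor_id in _pinned_ids:
        return False            # already pinned: skip instead of double-pinning
    _pinned_ids.add(tensor_id)  # a real implementation would pin the host memory here
    return True

def unpin_memory(tensor_id):
    if tensor_id not in _pinned_ids:
        return False               # not pinned by us: skip instead of double-unpinning
    _pinned_ids.remove(tensor_id)  # a real implementation would unpin the host memory here
    return True

print(pin_memory(1), pin_memory(1))      # True False
print(unpin_memory(1), unpin_memory(1))  # True False
```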