Commit Graph

5078 Commits

Author SHA1 Message Date
doctorpangloss
4c67f75d36 Fix pylint issues 2025-10-24 10:12:20 -07:00
doctorpangloss
607fcf7321 Fix trim 2025-10-23 19:49:59 -07:00
doctorpangloss
058e5dc634 Fixes to tests and configuration, making library use more durable 2025-10-23 19:46:40 -07:00
doctorpangloss
67f9d3e693 Improved custom nodes compatibility
- Fixed and verified compatibility with the following nodes:
   ComfyUI-Manager==3.0.1
   ComfyUI_LayerStyle==1.0.90
   ComfyUI-Easy-Use==1.3.4 (required fix)
   ComfyUI-KJNodes==1.1.7 (required mitigation)
   ComfyUI_Custom_Nodes_AlekPet==1.0.88
   LanPaint==1.4.1
   Comfyui-Simple-Json-Node==1.1.0
 - Add support for referencing files in packages.
2025-10-23 16:28:10 -07:00
doctorpangloss
d707efe53c Add default font, add package fs, prepare to add more sherlocked nodes 2025-10-23 12:50:34 -07:00
doctorpangloss
72bb572181 Cache requests in nodes 2025-10-23 11:52:36 -07:00
doctorpangloss
7259c252ad Fix linting issue 2025-10-22 15:26:26 -07:00
doctorpangloss
95d8ca6c53 Test and fix cli args issues 2025-10-22 15:03:01 -07:00
doctorpangloss
6954e3e247 Fix torch.compiler.is_compiling missing on torch 2.3 and earlier 2025-10-22 13:37:20 -07:00
doctorpangloss
6810282be4 Update docker build instructions 2025-10-22 12:58:12 -07:00
doctorpangloss
b522208e09 Fix linting issue 2025-10-22 12:30:46 -07:00
doctorpangloss
aed4663bca Skip inference tests for now 2025-10-22 12:29:04 -07:00
doctorpangloss
674b69c291 Fix linting errors, use register_buffer 2025-10-22 12:16:09 -07:00
doctorpangloss
427c21309c Fix whitespace error in comfyui 2025-10-21 15:56:24 -07:00
doctorpangloss
232ec2a06a Include libcairo2-dev 2025-10-21 15:43:54 -07:00
doctorpangloss
fefcfaa2e4 Run inference tests on 3090 2025-10-21 15:21:02 -07:00
doctorpangloss
b4998009b0 tweak workflow to cd into the correct directory 2025-10-21 15:20:24 -07:00
doctorpangloss
be9856e4db Only use dGPU in this test 2025-10-21 15:05:38 -07:00
doctorpangloss
170b8348a3 fix torch error and suggestion-mode 2025-10-21 14:52:11 -07:00
doctorpangloss
2261f18306 Fix wrong cache type 2025-10-21 14:42:20 -07:00
doctorpangloss
9bfc6ff14d tweak dgpu 2025-10-21 14:35:52 -07:00
doctorpangloss
99be2c40fa add amd gfx1102 arch for pytorch attention 2025-10-21 14:33:54 -07:00
doctorpangloss
d9269785d3 Better memory trimming and group_offloading logic 2025-10-21 14:27:26 -07:00
doctorpangloss
7ed9292532 fix gguf logs to debug 2025-10-21 14:27:16 -07:00
doctorpangloss
5186d19441 Merge branch 'master' of github.com:comfyanonymous/ComfyUI into fix-merge 2025-10-21 10:56:30 -07:00
doctorpangloss
358cb834d6 fix tests, make fixture of core workflow test function to reclaim RAM better 2025-10-21 10:53:49 -07:00
doctorpangloss
f54af2c7ff Fix pylint errors 2025-10-21 10:53:49 -07:00
doctorpangloss
dc94081155 tune up mmaudio, nodes_eps 2025-10-21 10:53:49 -07:00
doctorpangloss
f1016ef1c1 fix syntax error 2025-10-21 10:53:48 -07:00
doctorpangloss
9d649a18f0 remove broken build process 2025-10-21 10:53:48 -07:00
doctorpangloss
45ae806beb move eps nodes 2025-10-21 10:53:48 -07:00
doctorpangloss
7648c0b065 move mochi nodes 2025-10-21 10:53:48 -07:00
doctorpangloss
7cbebbc11a rm mochi nodes 2025-10-21 10:53:48 -07:00
doctorpangloss
be56a14e65 Merge commit 'a4787ac83bf6c83eeb459ed80fc9b36f63d2a3a7' of github.com:comfyanonymous/ComfyUI into fix-merge 2025-10-21 10:53:43 -07:00
comfyanonymous
560b1bdfca ComfyUI version v0.3.66 2025-10-21 01:12:32 -04:00
comfyanonymous
b7992f871a
Revert "execution: fold in dependency aware caching / Fix --cache-none with l…" (#10422)
This reverts commit b1467da480.
2025-10-20 19:03:06 -04:00
comfyanonymous
2c2aa409b0
Log message for cudnn disable on AMD. (#10418) 2025-10-20 15:43:24 -04:00
ComfyUI Wiki
a4787ac83b
Update template to 0.2.1 (#10413)
* Update template to 0.1.97

* Update template to 0.2.1
2025-10-20 15:28:36 -04:00
Christian Byrne
b5c59b763c
Deprecation warning on unused files (#10387)
* only warn for unused files

* include internal extensions
2025-10-19 13:05:46 -07:00
comfyanonymous
b4f30bd408
Pytorch is stupid. (#10398) 2025-10-19 01:25:35 -04:00
comfyanonymous
dad076aee6
Speed up chroma radiance. (#10395) 2025-10-18 23:19:52 -04:00
comfyanonymous
0cf33953a7
Fix batch size above 1 giving bad output in chroma radiance. (#10394) 2025-10-18 23:15:34 -04:00
comfyanonymous
5b80addafd
Turn off cuda malloc by default when --fast autotune is turned on. (#10393) 2025-10-18 22:35:46 -04:00
comfyanonymous
9da397ea2f
Disable torch compiler for cast_bias_weight function (#10384)
* Disable torch compiler for cast_bias_weight function

* Fix torch compile.
2025-10-17 20:03:28 -04:00
comfyanonymous
92d97380bd
Update Python 3.14 installation instructions (#10385)
Removed mention of installing pytorch nightly for Python 3.14.
2025-10-17 18:22:59 -04:00
Alexander Piskun
99ce2a1f66
convert nodes_controlnet.py to V3 schema (#10202) 2025-10-17 14:13:05 -07:00
rattus128
b1467da480
execution: fold in dependency aware caching / Fix --cache-none with loops/lazy etc (#10368)
* execution: fold in dependency aware caching

This makes --cache-none compatible with lazy and expanded
subgraphs.

Currently the --cache-none option is powered by the
DependencyAwareCache. The cache attempts to maintain a parallel
copy of the execution list data structure; however, it is only
set up once at the start of execution and does not get meaningful
updates to the execution list.

This causes multiple problems when --cache-none is used with lazy
and expanded subgraphs as the DAC does not accurately update its
copy of the execution data structure.

DAC attempts to handle subgraphs via ensure_subcache, however
this does not accurately connect to nodes outside the subgraph.
The current semantics of DAC are to free a node ASAP after the
dependent nodes are executed.

This means that if a subgraph refs such a node, it will be re-queued
and re-executed by the execution_list, but DAC won't see it in
its to-free lists anymore and will leak memory.

Rather than try to cover all the cases where the execution list
changes from inside the cache, move the whole problem to the
executor, which maintains an always up-to-date copy of the wanted
data structure.

The executor now has a fast-moving run-local cache of its own.
Each _to node has its own mini cache, and the cache is unconditionally
primed at the time of add_strong_link.

add_strong_link is called for all of static workflows, lazy links
and expanded subgraphs, so it's the singular source of truth for
output dependencies.

In the case of a cache-hit, the executor cache will hold the non-none
value (it will respect updates if they happen somehow as well).

In the case of a cache-miss, the executor caches a None and will
wait for a notification to update the value when the node completes.

When a node completes execution, it simply releases its mini-cache
and in turn its strong refs on its direct ancestor outputs, allowing
for ASAP freeing (same as the DependencyAwareCache but a little more
automatic).

This now allows for re-implementation of --cache-none with no cache
at all. The dependency aware cache was also observing the dependency
semantics for the objects and UI caches, which is not accurate (this
entire logic was always outputs specific).

This also prepares for more complex caching strategies (such as
RAM-pressure-based caching), where a cache can implement any freeing
strategy completely independently of the DependencyAwareness
requirement.

* main: re-implement --cache-none as no cache at all

The execution list now tracks dependency-aware caching more
correctly than the DependencyAwareCache did.

Change it to a cache that does nothing.

* test_execution: add --cache-none to the test suite

--cache-none is now expected to work universally. Run it through the
full unit test suite. Propagate the server parameterization for whether
or not the server is capable of caching, so that the minority of tests
that specifically check for cache hits can if/else. Hard-assert NOT
caching in the else branch to give some coverage of --cache-none's
expected behaviour of not actually caching.
2025-10-17 13:55:15 -07:00
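The per-node mini-cache scheme the commit above describes (unconditional priming at add_strong_link, notification on a miss, strong refs released when a node completes) can be sketched roughly as follows. This is a minimal illustration, not ComfyUI's actual implementation; the class and method names here are hypothetical.

```python
# Hypothetical sketch of an executor-local "mini cache": each consuming
# node holds strong refs to its direct ancestors' outputs, primed when
# add_strong_link is called, and released as soon as the consumer runs.

class ExecutorCache:
    def __init__(self):
        self.minis = {}    # to_node -> {from_node: output or None (miss)}
        self.waiters = {}  # from_node -> set of to_nodes awaiting its output
        self.outputs = {}  # from_node -> completed output, while referenced

    def add_strong_link(self, from_node, to_node):
        # Prime unconditionally: a cache hit stores the value now; a miss
        # stores None and waits for notification when the producer finishes.
        mini = self.minis.setdefault(to_node, {})
        if from_node in self.outputs:
            mini[from_node] = self.outputs[from_node]  # strong ref
        else:
            mini[from_node] = None
            self.waiters.setdefault(from_node, set()).add(to_node)

    def on_node_complete(self, node, output):
        # Notify consumers that were primed with a miss...
        self.outputs[node] = output
        for to_node in self.waiters.pop(node, ()):
            self.minis[to_node][node] = output
        # ...then drop this node's own mini cache, releasing its strong
        # refs on direct ancestor outputs for ASAP freeing.
        for ancestor in self.minis.pop(node, {}):
            self._maybe_free(ancestor)

    def _maybe_free(self, node):
        # Free the stored output once no mini cache still references it.
        if not any(node in m for m in self.minis.values()):
            self.outputs.pop(node, None)
```

Because every strong link (static, lazy, or from an expanded subgraph) passes through add_strong_link, the executor's view of output dependencies stays current even when the execution list changes mid-run, which is the property the DependencyAwareCache could not maintain.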
Jedrzej Kosinski
d8d60b5609
Do batch_slice in EasyCache's apply_cache_diff (#10376) 2025-10-17 00:39:37 -04:00
comfyanonymous
b1293d50ef
workaround also works on cudnn 91200 (#10375) 2025-10-16 19:59:56 -04:00
comfyanonymous
19b466160c
Workaround for nvidia issue where VAE uses 3x more memory on torch 2.9 (#10373) 2025-10-16 18:16:03 -04:00