Commit Graph

2361 Commits

Author SHA1 Message Date
doctorpangloss
cb97b94ad9 Unclear why this is throwing linting errors 2025-11-04 18:15:54 -08:00
doctorpangloss
98ae55b059 Improvements to compatibility with custom nodes, distributed backends, and other changes

 - remove uv.lock since it will not be used in most cases for installation
 - add cli args to prevent some custom nodes from installing packages at runtime (see the sketch after this entry)
 - temp directories can now be shared between workers without being deleted
 - propcache, whose release was yanked upstream, is now declared in the dependencies
 - fix configuration-argument loading in some tests
2025-11-04 17:40:19 -08:00
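A minimal sketch of how such a guard could work, assuming an argparse-based CLI; the flag name --disable-pip-install and the subprocess interception are illustrative assumptions, not this repository's actual interface:

```python
import argparse
import subprocess

# Hypothetical flag name; the real CLI surface may differ.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--disable-pip-install",
    action="store_true",
    help="Refuse custom nodes' attempts to install packages at runtime.",
)
args, _ = parser.parse_known_args()

if args.disable_pip_install:
    _real_run = subprocess.run

    def _guarded_run(cmd, *a, **kw):
        # Block invocations that shell out to pip; everything else passes through.
        parts = cmd if isinstance(cmd, (list, tuple)) else [cmd]
        if any("pip" in str(part) for part in parts):
            raise PermissionError("runtime package installation is disabled")
        return _real_run(cmd, *a, **kw)

    subprocess.run = _guarded_run
```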
doctorpangloss
d9e3ba4bec Further improvements to logic nodes, lazy evaluation, and related functionality 2025-10-31 16:14:08 -07:00
doctorpangloss
97f911280e Improve lazy graph evaluation, add logic operators 2025-10-31 14:27:27 -07:00
Benjamin Berman
6f2589f256 wip latent nodes can return None for graceful behavior in multi-reference-latent scenarios 2025-10-30 12:38:02 -07:00
Benjamin Berman
82bffb7855 Better integration with logic nodes from EasyUse
 - ImageRequestParameter now returns None or a provided default when the value of its path / URL is empty, instead of erroring (sketched after this entry)
 - Custom nodes which touch nodes.NODE_CLASS_MAPPINGS will once again see all the nodes available during execution, instead of only the base nodes
2025-10-29 15:36:35 -07:00
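A hypothetical sketch of the empty-value behavior described in the first bullet; the helper name and signature are assumptions, not the node's actual code:

```python
from typing import Optional

def resolve_image_request(value: Optional[str], default: Optional[str] = None) -> Optional[str]:
    # An empty or missing path/URL yields the provided default (or None)
    # instead of raising, so downstream logic nodes can branch on it.
    if value is None or value.strip() == "":
        return default
    return value

assert resolve_image_request("") is None
assert resolve_image_request("", default="fallback.png") == "fallback.png"
assert resolve_image_request("https://example.com/a.png") == "https://example.com/a.png"
```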
Benjamin Berman
05dd159136 EasyUse will try to send_sync during module initialization, which is invalid. This tolerates such calls. 2025-10-28 17:32:06 -07:00
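A sketch of what tolerating such calls could look like, assuming the server exposes its asyncio loop; the wrapper below is illustrative, not ComfyUI's actual PromptServer code:

```python
def tolerant_send_sync(server, event, data, sid=None):
    # If a custom node calls send_sync during module import, before the
    # server's event loop is running, drop the message instead of raising.
    loop = getattr(server, "loop", None)
    if loop is None or not loop.is_running():
        return  # nothing to deliver to yet; tolerate the early call
    server.send_sync(event, data, sid)
```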
doctorpangloss
6f533f2f97 Fix extra paths loading in worker 2025-10-26 08:21:53 -07:00
doctorpangloss
df786c59ce Fix unit tests 2025-10-24 10:51:18 -07:00
doctorpangloss
4c67f75d36 Fix pylint issues 2025-10-24 10:12:20 -07:00
doctorpangloss
607fcf7321 Fix trim 2025-10-23 19:49:59 -07:00
doctorpangloss
058e5dc634 Fixes to tests and configuration, making library use more durable 2025-10-23 19:46:40 -07:00
doctorpangloss
67f9d3e693 Improved custom nodes compatibility
- Fixed and verified compatibility with the following nodes:
   ComfyUI-Manager==3.0.1
   ComfyUI_LayerStyle==1.0.90
   ComfyUI-Easy-Use==1.3.4 (required fix)
   ComfyUI-KJNodes==1.1.7 (required mitigation)
   ComfyUI_Custom_Nodes_AlekPet==1.0.88
   LanPaint==1.4.1
   Comfyui-Simple-Json-Node==1.1.0
 - Add support for referencing files in packages (see the sketch after this entry).
2025-10-23 16:28:10 -07:00
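For referencing files in packages, the standard-library route is importlib.resources; the package and file names below are placeholders, assuming the default font ships inside a package:

```python
from importlib import resources

# Placeholder names; substitute the actual package and resource.
font = resources.files("comfy_extras.fonts").joinpath("default.ttf")
with resources.as_file(font) as path:
    data = path.read_bytes()  # a real filesystem path, even if the package is zipped
```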
doctorpangloss
d707efe53c Add default font, add package fs, prepare to add more sherlocked nodes 2025-10-23 12:50:34 -07:00
doctorpangloss
72bb572181 Cache requests in nodes 2025-10-23 11:52:36 -07:00
doctorpangloss
7259c252ad Fix linting issue 2025-10-22 15:26:26 -07:00
doctorpangloss
95d8ca6c53 Test and fix cli args issues 2025-10-22 15:03:01 -07:00
doctorpangloss
6954e3e247 Fix torch.compiler.is_compiling missing on torch 2.3 and earlier 2025-10-22 13:37:20 -07:00
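The usual shape of such a fix is a feature-detecting shim; a sketch, assuming the fallback locations below (the commit itself only states the symbol is missing on torch 2.3 and earlier):

```python
import torch

def is_compiling() -> bool:
    # Prefer the public API where it exists; fall back to the private
    # _dynamo location on older torch; assume eager execution otherwise.
    compiler = getattr(torch, "compiler", None)
    if compiler is not None and hasattr(compiler, "is_compiling"):
        return compiler.is_compiling()
    dynamo = getattr(torch, "_dynamo", None)
    if dynamo is not None and hasattr(dynamo, "is_compiling"):
        return dynamo.is_compiling()
    return False
```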
doctorpangloss
b522208e09 Fix linting issue 2025-10-22 12:30:46 -07:00
doctorpangloss
674b69c291 Fix linting errors, use register_buffer 2025-10-22 12:16:09 -07:00
doctorpangloss
427c21309c Fix whitespace error in comfyui 2025-10-21 15:56:24 -07:00
doctorpangloss
170b8348a3 fix torch error and suggestion-mode 2025-10-21 14:52:11 -07:00
doctorpangloss
2261f18306 Fix wrong cache type 2025-10-21 14:42:20 -07:00
doctorpangloss
99be2c40fa add amd gfx1102 arch for pytorch attention 2025-10-21 14:33:54 -07:00
doctorpangloss
d9269785d3 Better memory trimming and group_offloading logic 2025-10-21 14:27:26 -07:00
doctorpangloss
7ed9292532 lower gguf logs to debug level 2025-10-21 14:27:16 -07:00
doctorpangloss
5186d19441 Merge branch 'master' of github.com:comfyanonymous/ComfyUI into fix-merge 2025-10-21 10:56:30 -07:00
doctorpangloss
358cb834d6 fix tests, convert the core workflow test function into a fixture to reclaim RAM better 2025-10-21 10:53:49 -07:00
doctorpangloss
f54af2c7ff Fix pylint errors 2025-10-21 10:53:49 -07:00
doctorpangloss
dc94081155 tune up mmaudio, nodes_eps 2025-10-21 10:53:49 -07:00
doctorpangloss
f1016ef1c1 fix syntax error 2025-10-21 10:53:48 -07:00
doctorpangloss
be56a14e65 Merge commit 'a4787ac83bf6c83eeb459ed80fc9b36f63d2a3a7' of github.com:comfyanonymous/ComfyUI into fix-merge 2025-10-21 10:53:43 -07:00
comfyanonymous
2c2aa409b0 Log message for cudnn disable on AMD. (#10418) 2025-10-20 15:43:24 -04:00
comfyanonymous
b4f30bd408 Pytorch is stupid. (#10398) 2025-10-19 01:25:35 -04:00
comfyanonymous
dad076aee6 Speed up chroma radiance. (#10395) 2025-10-18 23:19:52 -04:00
comfyanonymous
0cf33953a7 Fix batch size above 1 giving bad output in chroma radiance. (#10394) 2025-10-18 23:15:34 -04:00
comfyanonymous
5b80addafd Turn off cuda malloc by default when --fast autotune is turned on. (#10393) 2025-10-18 22:35:46 -04:00
comfyanonymous
9da397ea2f Disable torch compiler for cast_bias_weight function (#10384)
* Disable torch compiler for cast_bias_weight function
* Fix torch compile.
2025-10-17 20:03:28 -04:00
comfyanonymous
b1293d50ef workaround also works on cudnn 91200 (#10375) 2025-10-16 19:59:56 -04:00
comfyanonymous
19b466160c Workaround for nvidia issue where VAE uses 3x more memory on torch 2.9 (#10373) 2025-10-16 18:16:03 -04:00
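The commit title does not state the mechanism; judging from the neighboring cudnn commits (#10375, #10418), one assumed sketch is a version-gated cuDNN toggle:

```python
import torch

def maybe_disable_cudnn() -> bool:
    # Purely illustrative: the version threshold and the decision to flip
    # the global flag are assumptions, not the referenced PR's logic.
    if torch.backends.cudnn.is_available():
        version = torch.backends.cudnn.version()  # e.g. 91200
        if version is not None and version >= 91000:
            torch.backends.cudnn.enabled = False
            return True
    return False
```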
Faych
afa8a24fe1 refactor: Replace manual patches merging with merge_nested_dicts (#10360) 2025-10-15 17:16:09 -07:00
Jedrzej Kosinski
493b81e48f Fix order of inputs in nested merge_nested_dicts (#10362) 2025-10-15 16:47:26 -07:00
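A generic sketch of why argument order matters in a nested merge; the signature is an assumption, not ComfyUI's actual merge_nested_dicts:

```python
def merge_nested_dicts(base: dict, overrides: dict) -> dict:
    # Values from `overrides` win on conflicts, recursing into sub-dicts,
    # so swapping the two arguments changes the result.
    merged = dict(base)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_nested_dicts(merged[key], value)
        else:
            merged[key] = value
    return merged

assert merge_nested_dicts({"a": 1}, {"a": 2}) == {"a": 2}
assert merge_nested_dicts({"a": 2}, {"a": 1}) == {"a": 1}  # order matters
```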
comfyanonymous
1c10b33f9b gfx942 doesn't support fp8 operations. (#10348) 2025-10-15 00:21:11 -04:00
comfyanonymous
3374e900d0 Faster workflow cancelling. (#10301) 2025-10-13 23:43:53 -04:00
comfyanonymous
dfff7e5332 Better memory estimation for the SD/Flux VAE on AMD. (#10334) 2025-10-13 22:37:19 -04:00
comfyanonymous
e4ea393666 Fix loading old stable diffusion ckpt files on newer numpy. (#10333) 2025-10-13 22:18:58 -04:00
comfyanonymous
c8674bc6e9 Enable RDNA4 pytorch attention on ROCm 7.0 and up. (#10332) 2025-10-13 21:19:03 -04:00
rattus128
95ca2e56c8 WAN2.2: Fix cache VRAM leak on error (#10308)
Same change pattern as 7e8dd275c2, applied to WAN2.2.

If this suffers an exception (such as a VRAM OOM), it will leave the
encode() and decode() methods, which skips the cleanup of the WAN
feature cache. The comfy node cache then ultimately keeps a reference
to this object, which is in turn holding references to large tensors
from the failed execution.

The feature cache is currently set up as a class variable on the
encoder/decoder; however, the encode and decode functions always clear
it on both entry and exit of normal execution.

It's likely the design intent is that this is usable as a streaming
encoder where the input comes in batches, but the functions as they
are today don't support that.

So simplify by bringing the cache back to a local variable, so that if
a VRAM OOM does occur, the cache itself is properly garbage-collected
when the encode()/decode() functions disappear from the stack.
2025-10-13 15:23:11 -04:00
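A sketch of the local-variable pattern the commit describes, with stand-in names (this is not the WAN2.2 code):

```python
import torch

def encode(frames: torch.Tensor) -> torch.Tensor:
    # Keeping the feature cache as a local means that if a CUDA OOM is
    # raised mid-loop, the cache becomes unreachable as soon as encode()
    # unwinds, so its large tensors can be garbage-collected. A class-level
    # cache would survive the exception and keep those tensors alive.
    feature_cache = []
    out = []
    for frame in frames:
        feature_cache.append(frame)  # stand-in for the streaming feature state
        out.append(frame * 2.0)      # stand-in for the real encode math
    return torch.stack(out)
```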
comfyanonymous
e693e4db6a Always set diffusion model to eval() mode. (#10331) 2025-10-13 14:57:27 -04:00
comfyanonymous
a125cd84b0 Improve AMD performance. (#10302)
I honestly have no idea why this improves things but it does.
2025-10-12 00:28:01 -04:00