Commit Graph

5472 Commits

Author SHA1 Message Date
Alexander Piskun
dfac94695b
fix img2img operation in Dall2 node (#10552) 2025-10-30 10:22:35 -07:00
Alexander Piskun
163b629c70
use new API client in Pixverse and Ideogram nodes (#10543) 2025-10-29 23:49:03 -07:00
Jedrzej Kosinski
998bf60beb
Add units/info for the numbers displayed on 'load completely' and 'load partially' log messages (#10538) 2025-10-29 19:37:06 -04:00
comfyanonymous
906c089957
Fix small performance regression with fp8 fast and scaled fp8. (#10537) 2025-10-29 19:29:01 -04:00
Benjamin Berman
82bffb7855 Better integration with logic nodes from EasyUse
- ImageRequestParameter now returns None or a provided default when the value of its path/URL is empty, instead of erroring
- Custom nodes which touch nodes.NODE_CLASS_MAPPINGS will once again see all the nodes available during execution, instead of only the base nodes
2025-10-29 15:36:35 -07:00
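The first bullet above can be sketched as follows. This is an illustrative stand-in, not the actual ImageRequestParameter code: the function name and body are assumptions, showing only the described behavior (empty path/URL yields the default, or None, instead of raising).

```python
# Hedged sketch: an empty path/URL now yields the provided default (or None)
# instead of raising. Illustrative stand-in, not the actual node code.

def resolve_image_request(path_or_url, default=None):
    if path_or_url is None or not str(path_or_url).strip():
        return default      # previously this case raised an error
    return path_or_url      # the real node would go on to load the image

first = resolve_image_request("", default="fallback.png")         # -> "fallback.png"
second = resolve_image_request(None)                              # -> None
third = resolve_image_request("https://example.com/cat.png")      # passed through
```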
comfyanonymous
25de7b1bfa
Try to fix slow load issue on low ram hardware with pinned mem. (#10536) 2025-10-29 17:20:27 -04:00
rattus
ab7ab5be23
Fix Race condition in --async-offload that can cause corruption (#10501)
* mm: factor out the current stream getter

Make this a reusable function.

* ops: sync the offload stream with the consumption of w&b

This sync is necessary, as PyTorch queues CUDA async frees on the
same stream that created the tensor. In the case of async offload, this
will be the offload stream.

Weights and biases can go out of scope in Python, which then
triggers the PyTorch garbage collector to queue the free operation on
the offload stream, possibly before the compute stream has used the
weight. This causes a use-after-free on the weight data, leading to total
corruption of some workflows.

So sync the offload stream with the compute stream after the weight
has been used, so the free has to wait for the weight to be consumed.

cast_bias_weight is extended in a backwards-compatible way, with
the new behaviour opt-in via a defaulted parameter. This handles
custom node packs calling cast_bias_weight directly and disables
async-offload for them (as they do not handle the race).

The pattern is now:

cast_bias_weight(..., offloadable=True)  # this might be offloaded
thing(weight, bias, ...)
uncast_bias_weight(...)

* controlnet: adopt new cast_bias_weight synchronization scheme

This is necessary for safe async weight offloading.

* mm: sync the last stream in the queue, not the next

Currently this peeks ahead to sync the next stream in the queue of
streams with the compute stream. This doesn't allow much
parallelization, as the end result is that you can only get one weight load
ahead, regardless of how many streams you have.

Rotate the loop logic here to synchronize the end of the queue before
returning the next stream. This allows weights to be loaded ahead of the
compute stream's position.
2025-10-29 17:17:46 -04:00
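The race and the fix described in this commit can be modeled with a toy scheduler. This is purely illustrative (not ComfyUI or PyTorch code): each "stream" drains its op queue in order, and a wait entry stands in for cross-stream synchronization such as uncast_bias_weight making the offload stream wait on the compute stream.

```python
# Toy model of the --async-offload race: ops are labels, or ("wait", other, n)
# meaning "block until stream `other` has completed n labeled ops".

def simulate(streams):
    pos = {name: 0 for name in streams}        # next op index per stream
    completed = {name: 0 for name in streams}  # labeled ops done per stream
    order = []
    progressed = True
    while progressed:
        progressed = False
        for name, ops in streams.items():
            while pos[name] < len(ops):
                op = ops[pos[name]]
                if isinstance(op, tuple):
                    _, other, n = op
                    if completed[other] < n:
                        break                  # blocked on the other stream
                else:
                    order.append(op)
                    completed[name] += 1
                pos[name] += 1
                progressed = True
    return order

# Without the fix: the async free queued on the offload stream can run
# before the compute stream has consumed the weight (use-after-free).
racy = simulate({
    "offload": ["copy_weight", "free_weight"],
    "compute": ["use_weight"],
})
assert racy.index("free_weight") < racy.index("use_weight")

# With the fix: the offload stream waits for the compute stream, so the
# free is ordered after the use.
safe = simulate({
    "offload": ["copy_weight", ("wait", "compute", 1), "free_weight"],
    "compute": ["use_weight"],
})
assert safe.index("use_weight") < safe.index("free_weight")
```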
comfyanonymous
ec4fc2a09a
Fix case of weights not being unpinned. (#10533) 2025-10-29 15:48:06 -04:00
comfyanonymous
1a58087ac2
Reduce memory usage for fp8 scaled op. (#10531) 2025-10-29 15:43:51 -04:00
Alexander Piskun
6c14f3afac
use new API client in Luma and Minimax nodes (#10528) 2025-10-29 11:14:56 -07:00
Benjamin Berman
2d2d625ed0 Fix unit tests (no code changes, no re-release) 2025-10-29 05:26:10 -07:00
comfyanonymous
e525673f72
Fix issue. (#10527) 2025-10-29 00:37:00 -04:00
comfyanonymous
3fa7a5c04a
Speed up offloading using pinned memory. (#10526)
To enable this feature use: --fast pinned_memory
2025-10-29 00:21:01 -04:00
Benjamin Berman
05dd159136 EasyUse will try to send_sync during module initialization, which is invalid. This tolerates such calls. 2025-10-28 17:32:06 -07:00
Benjamin Berman
7318e59baf Fix regexp outputs and give better signals when there's no result 2025-10-28 17:25:49 -07:00
Alexander Piskun
210f7a1ba5
convert nodes_recraft.py to V3 schema (#10507) 2025-10-28 14:38:05 -07:00
rattus
d202c2ba74
execution: Allow a subgraph nodes to execute multiple times (#10499)
In the case of --cache-none, lazy and subgraph execution can cause
anything to be run multiple times per workflow. If one of those rerun nodes is
itself a subgraph generator, this will crash, for two reasons.

First, pending_subgraph_results[] does not clean up entries after their use.
So when a pending_subgraph_result is consumed, remove it from the list,
so that if the corresponding node is fully re-executed the lookup misses
and it falls through to execute the node as it should.

Secondly, there is an explicit enforcement against duplicates when
adding subgraph nodes as ephemerals to the dynprompt. Remove this
enforcement, as the use case is now valid.
2025-10-28 16:22:08 -04:00
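The first fix can be sketched with plain `dict.pop` semantics. Names here are illustrative, not the actual ComfyUI execution code: popping the entry both fetches and removes it, so a fully re-executed node misses the lookup and falls through to normal execution instead of replaying a stale result.

```python
# Hedged sketch of consume-and-remove for pending subgraph results.

pending_subgraph_results = {}

def execute_node(node_id, run):
    cached = pending_subgraph_results.pop(node_id, None)
    if cached is not None:
        return cached           # consume the pending result exactly once
    return run(node_id)         # cache miss: execute the node normally

pending_subgraph_results["7"] = "pending-result"
first = execute_node("7", lambda n: f"ran-{n}")   # consumes the cached entry
second = execute_node("7", lambda n: f"ran-{n}")  # re-executes the node
```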
contentis
8817f8fc14
Mixed Precision Quantization System (#10498)
* Implement mixed precision operations with a registry design and metadata for the quant spec in the checkpoint.

* Updated design using Tensor Subclasses

* Fix FP8 MM

* An actually functional POC

* Remove CK reference and ensure correct compute dtype

* Update unit tests

* ruff lint

* Fix missing keys

* Rename quant dtype parameter

* Fix unittests for CPU build
2025-10-28 16:20:53 -04:00
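A registry-based design like the one this commit message describes can be sketched as follows. This is an assumption-laden toy, not the actual implementation: quant formats register a handler class, and per-tensor metadata stored in the checkpoint selects which handler dequantizes each tensor.

```python
# Hedged sketch of a quant-format registry keyed by checkpoint metadata.

QUANT_REGISTRY = {}

def register_quant(name):
    def decorator(cls):
        QUANT_REGISTRY[name] = cls
        return cls
    return decorator

@register_quant("fp8_scaled")
class Fp8Scaled:
    def __init__(self, scale):
        self.scale = scale
    def dequantize(self, values):
        # toy: real code would operate on tensors in a compute dtype
        return [v * self.scale for v in values]

def load_tensor(raw_values, metadata):
    """Pick the handler named by the checkpoint's quant metadata."""
    handler_cls = QUANT_REGISTRY[metadata["quant_dtype"]]
    return handler_cls(metadata["scale"]).dequantize(raw_values)

out = load_tensor([2.0, 4.0], {"quant_dtype": "fp8_scaled", "scale": 0.5})
```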
comfyanonymous
22e40d2ace
Tell users to update their nvidia drivers if portable doesn't start. (#10518) 2025-10-28 15:08:08 -04:00
comfyanonymous
3bea4efc6b
Tell users to update nvidia drivers if problem with portable. (#10510) 2025-10-28 04:45:45 -04:00
comfyanonymous
8cf2ba4ba6
Remove comfy api key from queue api. (#10502) 2025-10-28 03:23:52 -04:00
comfyanonymous
b61a40cbc9
Bump stable portable to cu130 python 3.13.9 (#10508) 2025-10-28 03:21:45 -04:00
comfyanonymous
f2bb3230b7 ComfyUI version v0.3.67 2025-10-28 03:03:59 -04:00
Jedrzej Kosinski
614b8d3345
frontend bump to 1.28.8 (#10506) 2025-10-28 03:01:13 -04:00
ComfyUI Wiki
6abc30aae9
Update template to 0.2.4 (#10505) 2025-10-28 01:56:30 -04:00
Alexander Piskun
55bad30375
feat(api-nodes): add LTXV API nodes (#10496) 2025-10-27 22:25:29 -07:00
ComfyUI Wiki
c305deed56
Update template to 0.2.3 (#10503) 2025-10-27 22:24:16 -07:00
comfyanonymous
601ee1775a
Add a bat to run comfyui portable without api nodes. (#10504) 2025-10-27 23:54:00 -04:00
comfyanonymous
c170fd2db5
Bump portable deps workflow to torch cu130 python 3.13.9 (#10493) 2025-10-26 20:23:01 -04:00
doctorpangloss
6f533f2f97 Fix extra paths loading in worker 2025-10-26 08:21:53 -07:00
Alexander Piskun
9d529e5308
fix(api-nodes): random issues on Windows by capturing general OSError for retries (#10486) 2025-10-25 23:51:06 -07:00
comfyanonymous
f6bbc1ac84
Fix mistake. (#10484) 2025-10-25 23:07:29 -04:00
comfyanonymous
098a352f13
Add warning for torch-directml usage (#10482)
Added a warning message about the state of torch-directml.
2025-10-25 20:05:22 -04:00
Alexander Piskun
e86b79ab9e
convert Gemini API nodes to V3 schema (#10476) 2025-10-25 14:35:30 -07:00
comfyanonymous
426cde37f1
Remove useless function (#10472) 2025-10-24 19:56:51 -04:00
Alexander Piskun
dd5af0c587
convert Tripo API nodes to V3 schema (#10469) 2025-10-24 15:48:34 -07:00
doctorpangloss
df786c59ce Fix unit tests 2025-10-24 10:51:18 -07:00
doctorpangloss
4c67f75d36 Fix pylint issues 2025-10-24 10:12:20 -07:00
Alexander Piskun
388b306a2b
feat(api-nodes): network client v2: async ops, cancellation, downloads, refactor (#10390)
* feat(api-nodes): implement new API client for V3 nodes

* converted WAN nodes to use new client; polishing

* fix(auth): do not leak authentication for absolute URLs

* convert BFL API nodes to use new API client; remove deprecated BFL nodes

* converted Google Veo nodes

* fix(Veo3.1 model): take into account "generate_audio" parameter
2025-10-23 22:37:16 -07:00
ComfyUI Wiki
24188b3141
Update template to 0.2.2 (#10461)
Fix template typo issue
2025-10-24 01:36:30 -04:00
doctorpangloss
607fcf7321 Fix trim 2025-10-23 19:49:59 -07:00
doctorpangloss
058e5dc634 Fixes to tests and configuration, making library use more durable 2025-10-23 19:46:40 -07:00
comfyanonymous
1bcda6df98
WIP way to support multi multi dimensional latents. (#10456) 2025-10-23 21:21:14 -04:00
doctorpangloss
67f9d3e693 Improved custom nodes compatibility
- Fixed and verified compatibility with the following nodes:
   ComfyUI-Manager==3.0.1
   ComfyUI_LayerStyle==1.0.90
   ComfyUI-Easy-Use==1.3.4 (required fix)
   ComfyUI-KJNodes==1.1.7 (required mitigation)
   ComfyUI_Custom_Nodes_AlekPet==1.0.88
   LanPaint==1.4.1
   Comfyui-Simple-Json-Node==1.1.0
- Add support for referencing files in packages.
2025-10-23 16:28:10 -07:00
doctorpangloss
d707efe53c Add default font, add package fs, prepare to add more sherlocked nodes 2025-10-23 12:50:34 -07:00
doctorpangloss
72bb572181 Cache requests in nodes 2025-10-23 11:52:36 -07:00
doctorpangloss
7259c252ad Fix linting issue 2025-10-22 15:26:26 -07:00
doctorpangloss
95d8ca6c53 Test and fix cli args issues 2025-10-22 15:03:01 -07:00
comfyanonymous
a1864c01f2
Small readme improvement. (#10442) 2025-10-22 17:26:22 -04:00
doctorpangloss
6954e3e247 Fix torch.compiler.is_compiling missing on torch 2.3 and earlier 2025-10-22 13:37:20 -07:00