* fix: pin SQLAlchemy>=2.0 in requirements.txt (fixes #13036) (#13316)
* Refactor io to IO in nodes_ace.py (#13485)
* Bump comfyui-frontend-package to 1.42.12 (#13489)
* Make the ltx audio vae more native. (#13486)
* feat(api-nodes): add automatic downscaling of videos for ByteDance 2 nodes (#13465)
* Support standalone LTXV audio VAEs (#13499)
* [Partner Nodes] added 4K resolution for Veo models; added Veo 3 Lite model (#13330)
* feat(api nodes): added 4K resolution for Veo models; added Veo 3 Lite model
Signed-off-by: bigcat88 <bigcat88@icloud.com>
* increase poll_interval from 5 to 9
---------
Signed-off-by: bigcat88 <bigcat88@icloud.com>
Co-authored-by: Jedrzej Kosinski <kosinkadink1@gmail.com>
* Bump comfyui-frontend-package to 1.42.14 (#13493)
* Add gpt-image-2 as version option (#13501)
* Allow logging in comfy app files. (#13505)
* chore: update workflow templates to v0.9.59 (#13507)
* fix(veo): reject 4K resolution for veo-3.0 models in Veo3VideoGenerationNode (#13504)
The tooltip on the resolution input states that 4K is not available for
veo-3.1-lite or veo-3.0 models, but the execute guard only rejected the
lite combination. Selecting 4K with veo-3.0-generate-001 or
veo-3.0-fast-generate-001 would fall through and hit the upstream API
with an invalid request.
Broaden the guard to match the documented behavior and update the error
message accordingly.
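A minimal sketch of the broadened guard (names are assumed from the description above, not taken from the actual node code):

    def validate_resolution(model: str, resolution: str):
        # Hypothetical names; reject 4K for both the veo-3.0 family and
        # the lite variant, matching the documented tooltip behavior.
        if resolution == "4K" and (model.startswith("veo-3.0") or "lite" in model):
            raise ValueError(
                "4K resolution is not available for veo-3.1-lite or veo-3.0 models."
            )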
Co-authored-by: Jedrzej Kosinski <kosinkadink1@gmail.com>
* feat: RIFE and FILM frame interpolation model support (CORE-29) (#13258)
* initial RIFE support
* Also support FILM
* Better RAM usage, reduce FILM VRAM peak
* Add model folder placeholder
* Fix oom fallback frame loss
* Remove torch.compile for now
* Rename model input
* Shorter input type name
---------
* fix: use Parameter assignment for Stable_Zero123 cc_projection weights (fixes #13492) (#13518)
On Windows with aimdo enabled, disable_weight_init.Linear uses lazy
initialization that sets weight and bias to None to avoid unnecessary
memory allocation. This caused a crash when copy_() was called on the
None weight attribute in Stable_Zero123.__init__.
Replace copy_() with direct torch.nn.Parameter assignment, which works
correctly on both Windows (aimdo enabled) and other platforms.
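A minimal sketch of the change, assuming a lazily initialized Linear whose weight may start as None:

    import torch

    def patch_cc_projection(cc_projection: torch.nn.Linear, cc_weight: torch.Tensor):
        # Before (crashed when lazy init left .weight as None):
        #   cc_projection.weight.copy_(cc_weight)
        # After: direct Parameter assignment materializes the weight itself,
        # so it works whether or not the tensor was ever allocated.
        cc_projection.weight = torch.nn.Parameter(cc_weight)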
* Derive InterruptProcessingException from BaseException (#13523)
* bump manager version to 4.2.1 (#13516)
* ModelPatcherDynamic: force cast stray weights on comfy layers (#13487)
The mixed_precision ops can have input_scale parameters that are used
in tensor math but aren't a weight or bias, so they don't get proper VRAM
management. Treat these as force-castable parameters like the non-comfy
weights; stray random params are already buffers.
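A sketch of the idea with hypothetical names (the real ModelPatcherDynamic logic differs):

    import torch

    def stray_param_names(module: torch.nn.Module):
        # Parameters on a comfy layer that are neither weight nor bias
        # (e.g. input_scale on mixed-precision ops) need force-casting,
        # since normal VRAM management only tracks weight and bias.
        return [name for name, _ in module.named_parameters(recurse=False)
                if name not in ("weight", "bias")]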
* Update logging level for invalid version format (#13526)
* [Partner Nodes] add SD2 real human support (#13509)
* feat(api-nodes): add SD2 real human support
Signed-off-by: bigcat88 <bigcat88@icloud.com>
* fix: add validation before uploading Assets
Signed-off-by: bigcat88 <bigcat88@icloud.com>
* Add asset_id and group_id displaying on the node
Signed-off-by: bigcat88 <bigcat88@icloud.com>
* extend poll_op and use it instead of a custom async cycle
Signed-off-by: bigcat88 <bigcat88@icloud.com>
* added the polling for the "Active" status after asset creation
Signed-off-by: bigcat88 <bigcat88@icloud.com>
* updated tooltip for group_id
* allow usage of real human in the ByteDance2FirstLastFrame node
* add reference count limits
* corrected price in status when input assets contain video
Signed-off-by: bigcat88 <bigcat88@icloud.com>
---------
Signed-off-by: bigcat88 <bigcat88@icloud.com>
* feat: SAM (segment anything) 3.1 support (CORE-34) (#13408)
* [Partner Nodes] GPTImage: fix price badges, add new resolutions (#13519)
* fix(api-nodes): fixed price badges, add new resolutions
Signed-off-by: bigcat88 <bigcat88@icloud.com>
* properly calculate the total run cost when "n > 1"
Signed-off-by: bigcat88 <bigcat88@icloud.com>
---------
Signed-off-by: bigcat88 <bigcat88@icloud.com>
* chore: update workflow templates to v0.9.61 (#13533)
* chore: update embedded docs to v0.4.4 (#13535)
* add 4K resolution to Kling nodes (#13536)
Signed-off-by: bigcat88 <bigcat88@icloud.com>
* Fix LTXV Reference Audio node (#13531)
* comfy-aimdo 0.2.14: Hotfix async allocator estimations (#13534)
This was over-estimating the VRAM used by the async allocator when lots
of small tensors were in play.
Also change the versioning scheme to == so we can roll aimdo forward without
worrying about stable regressions downstream in ComfyUI core.
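Illustratively, the dependency pin becomes exact (hypothetical requirements.txt line):

    comfy-aimdo==0.2.14  # exact pin: rolling aimdo forward now requires an explicit bump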
* Disable sageattention for SAM3 (#13529)
Causes NaNs.
* execution: Add anti-cycle validation (#13169)
Currently, if the graph contains a cycle, validation just recurses
infinitely, hits a catch-all, and throws a generic error against the output
node that seeded the validation. Instead, fail the offending cyclic node
chain and handle it as an error in its own right.
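For illustration, a generic three-color DFS cycle check of the kind described (a sketch, not the actual ComfyUI validation code):

    def find_cycle(graph):
        # graph: dict mapping node id -> iterable of upstream node ids.
        # Returns the list of node ids forming a cycle, or None.
        WHITE, GRAY, BLACK = 0, 1, 2
        color = {n: WHITE for n in graph}
        def visit(n, path):
            color[n] = GRAY
            path.append(n)
            for m in graph.get(n, ()):
                if color.get(m, WHITE) == GRAY:
                    return path[path.index(m):]  # the offending node chain
                if color.get(m, WHITE) == WHITE:
                    cycle = visit(m, path)
                    if cycle is not None:
                        return cycle
            path.pop()
            color[n] = BLACK
            return None
        for n in graph:
            if color[n] == WHITE:
                cycle = visit(n, [])
                if cycle is not None:
                    return cycle
        return None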
Co-authored-by: guill <jacob.e.segal@gmail.com>
* chore: update workflow templates to v0.9.62 (#13539)
---------
Signed-off-by: bigcat88 <bigcat88@icloud.com>
Co-authored-by: Octopus <liyuan851277048@icloud.com>
Co-authored-by: comfyanonymous <121283862+comfyanonymous@users.noreply.github.com>
Co-authored-by: Comfy Org PR Bot <snomiao+comfy-pr@gmail.com>
Co-authored-by: Alexander Piskun <13381981+bigcat88@users.noreply.github.com>
Co-authored-by: Jukka Seppänen <40791699+kijai@users.noreply.github.com>
Co-authored-by: AustinMroz <austin@comfy.org>
Co-authored-by: Daxiong (Lin) <contact@comfyui-wiki.com>
Co-authored-by: Matt Miller <matt@miller-media.com>
Co-authored-by: blepping <157360029+blepping@users.noreply.github.com>
Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
Co-authored-by: rattus <46076784+rattus128@users.noreply.github.com>
Co-authored-by: guill <jacob.e.segal@gmail.com>
There was an issue where the resample split happened too early and dropped
one of the rolling convolutions a frame early. This is most noticeable as a
lighting/color change between pixel frames 5->6 (latent 2->3), or as a
lighting change between the first and last frame in an FLF wan flow.
* sd: soft_empty_cache on tiler fallback
This doesn't cost a lot and produces the expected VRAM reduction in
resource monitors when you fall back to the tiler.
* wan: vae: Don't recurse in local fns (move run_up)
Moved Decoder3d's recursive run_up out of forward into a class
method to avoid nested closure self-reference cycles. This avoids
cyclic garbage that delays collection of tensors, which in turn delays
VRAM release before the tiled fallback.
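The pattern, sketched with an illustrative skeleton: a recursive local function closes over itself (fn -> closure cell -> fn), a reference cycle that keeps captured tensors alive until the cycle collector runs; a bound method has no such cycle.

    import torch

    class Decoder3d(torch.nn.Module):  # illustrative skeleton only
        def _run_up(self, x, blocks):
            # Recursion as a method: no closure self-reference cycle, so
            # intermediate tensors are freed by refcounting alone.
            if not blocks:
                return x
            return self._run_up(blocks[0](x), blocks[1:])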
* ltx: vae: Don't recurse in local fns (move run_up)
Move the recursive run_up out of forward into a class
method to avoid nested closure self-reference cycles. This avoids
cyclic garbage that delays collection of tensors, which in turn delays
VRAM release before the tiled fallback.
* ltx: vae: add cache state to downsample block
* ltx: vae: Add time stride awareness to causal_conv_3d
* ltx: vae: Automate truncation for encoder
Other VAEs just truncate without error. Do the same.
* sd/ltx: Make chunked_io a flag in its own right
This is becoming bi-directional, so make it a purpose-named flag.
* ltx: vae: implement chunked encoder + CPU IO chunking
People are doing things with big frame counts in LTX, including V2V
flows. Implement the time-chunked encoder to keep VRAM down, with
the converse of the new CPU pre-allocation technique: the chunks
are brought over from the CPU just-in-time.
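A simplified sketch of the idea (hypothetical names; the real encoder also carries causal cache state across chunk boundaries):

    import torch

    def encode_time_chunked(encoder, pixels_cpu, chunk, device):
        # Upload each time-chunk from CPU just-in-time, encode it,
        # and offload the result so only one chunk lives on the GPU.
        outs = []
        for t in range(0, pixels_cpu.shape[2], chunk):
            piece = pixels_cpu[:, :, t:t + chunk].to(device)
            outs.append(encoder(piece).cpu())
            del piece  # release the GPU copy before the next chunk
        return torch.cat(outs, dim=2)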
* ltx: vae-encode: round chunk sizes more strictly
Only chunk sizes that are powers of 2 and multiples of 8 are valid due to cache slicing.
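A sketch of the stricter rounding: valid sizes here are powers of two that are also multiples of 8, i.e. 8, 16, 32, ...

    def round_chunk_size(n: int) -> int:
        # Round down to the largest power of two <= n, floored at 8.
        n = max(8, n)
        return 1 << (n.bit_length() - 1)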
* wan: vae: encoder: Add feature cache layer that corks singles
If a downsample only gives you a single frame, save it to the feature
cache and return nothing to the top level. This improves cacheability,
but also prepares support for going two-by-two rather than four-by-four
on the frames.
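Sketched with hypothetical names, the "cork" looks like this:

    def cork_singles(feat_cache, out):
        # out: [B, C, T, H, W]. If a downsample yields a single frame,
        # stash it in the feature cache and return nothing to the top level.
        if out.shape[2] == 1:
            feat_cache.append(out)
            return None
        return out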
* wan: remove all concatenation with the feature cache
The loopers are now responsible for ensuring that non-final frames are
processed at least two-by-two, eliminating the need for this cat case.
* wan: vae: recurse and chunk for 2+2 frames on decode
Avoid having to clone off slices of 4-frame chunks and reduce the size
of the big 6-frame convolutions down to 4. Saves VRAM.
* wan: encode frames 2x2.
Reduce VRAM usage greatly by encoding frames 2 at a time rather than
4.
* wan: vae: remove cloning
The loopers now control the chunking such that there are never more than
2 frames, so just cache these slices directly and avoid the clone
allocations completely.
* wan: vae: free consumer caller tensors on recursion
* wan: vae: restyle a little to match LTX
* ltx: vae: scale the chunk size with the user's VRAM
Scale this linearly down for users with low VRAM.
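A sketch of the scaling (constants and names are assumed), combined with the power-of-two rounding from the earlier commit:

    def scaled_chunk(base_chunk: int, free_vram: int, reference_vram: int) -> int:
        # Scale the chunk size linearly with available VRAM, floored at 8,
        # then round down to keep it a valid power of two.
        n = max(8, int(base_chunk * free_vram / reference_vram))
        return 1 << (n.bit_length() - 1)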
* ltx: vae: free non-chunking recursive intermediates
* ltx: vae: cleanup some intermediates
The conv layer can be the VRAM peak and it does a torch.cat, so clean up
the pieces of the cat. Also clear out the cache ASAP as each layer detects
its end, since this VAE surges in VRAM at the end: the end padding
increases the size of the final frame convolutions off-the-books to
the chunker. If all the earlier layers free up their caches, that can
offset the surge.
It's a fragmentation nightmare, and the chance of it having to re-cache
the pytorch allocator is very high, but you won't OOM.
PyTorch only filters for OOMs in its own allocators; however, there are
paths that can OOM on allocators created outside the PyTorch allocators.
These manifest as an AllocatorError, as PyTorch does not have universal
error translation to its OOM type on exception. Handle it. A log I have
for this also shows a double async report of the error, so call the
async discarder to clean up and make these OOMs look like OOMs.
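A hedged sketch of the handling (the exact exception surface is assumed; the point is translating non-torch allocator failures onto the same OOM fallback path):

    import torch

    def run_with_oom_fallback(fn, fallback):
        try:
            return fn()
        except torch.cuda.OutOfMemoryError:
            return fallback()
        except RuntimeError as e:
            # Allocations made outside torch's allocator are not translated
            # to the OOM type; match them by message (an assumption) instead.
            if "Allocator" in str(e):
                return fallback()
            raise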
* draft zeta (z-image pixel space)
* revert gitignore
* model loaded and able to run, however the vector direction is still wrong
* flip the vector direction to original again this time
* Move wrongly positioned Z image pixel space class
* inherit Radiance LatentFormat class
* Fix parameters in classes for Zeta x0 dino
* remove arbitrary nn.init instances
* Remove unused import of lru_cache
---------
Co-authored-by: silveroxides <ishimarukaito@gmail.com>
Implements per-guide attention attenuation via log-space additive bias
in self-attention. Each guide reference tracks its own strength and
optional spatial mask in conditioning metadata (guide_attention_entries).
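A minimal sketch of the mechanism, assuming conventional attention-score shapes (not the actual node code):

    import torch

    def apply_guide_bias(scores, mask, strength):
        # scores: [B, H, Q, K] pre-softmax logits; mask: 0/1, broadcastable
        # over K; strength > 0. Adding log(strength) where the mask is set
        # scales post-softmax attention to those keys by roughly `strength`
        # (before renormalization across the row).
        return scores + torch.log(torch.as_tensor(float(strength))) * mask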