Commit Graph

2529 Commits

Author SHA1 Message Date
comfyanonymous
1fee8827cb
Support for qwen edit plus model. Use the new TextEncodeQwenImageEditPlus. (#9986) 2025-09-22 16:49:48 -04:00
comfyanonymous
d1d9eb94b1
Lower wan memory estimation value a bit. (#9964)
Previous PR reduced the peak memory requirement.
2025-09-20 22:09:35 -04:00
Kohaku-Blueleaf
7be2b49b6b
Fix LoRA Trainer bugs with FP8 models. (#9854)
* Fix adapter weight init

* Fix fp8 model training

* Avoid inference tensor
2025-09-20 21:24:48 -04:00
comfyanonymous
e8df53b764
Update WanAnimateToVideo to more easily extend videos. (#9959) 2025-09-19 18:48:56 -04:00
doctorpangloss
b9368317af Various fixes
- raise on file load failures in the base nodes
- transformers models should load with trust_remote_code False whenever possible
- fix canonicalize_map call for Windows-Linux interoperability
2025-09-19 14:54:54 -07:00
comfyanonymous
dc95b6acc0
Basic WIP support for the wan animate model. (#9939) 2025-09-19 03:07:17 -04:00
comfyanonymous
24b0fce099
Do padding of audio embed in model for humo for more flexibility. (#9935) 2025-09-18 19:54:16 -04:00
DELUXA
8d6653fca6
Enable fp8 ops by default on gfx1200 (#9926) 2025-09-18 19:50:37 -04:00
doctorpangloss
ffbb2f7cd3 Fix bug in model_downloader 2025-09-18 14:47:28 -07:00
doctorpangloss
e6762bb82a Use pylint dynamic member correctly 2025-09-18 13:42:05 -07:00
doctorpangloss
bc201cea4d Improve tests, fix issues with alternate filenames, improve group offloading support for transformers models 2025-09-18 13:25:08 -07:00
doctorpangloss
79b8723f61 Various fixes
- Fix 16 bit exif saving for PNGs
- Validate alternative filenames correctly
- Improve speed of test workflows by setting steps to 1
2025-09-17 16:04:05 -07:00
comfyanonymous
dd611a7700
Support the HuMo 17B model. (#9912) 2025-09-17 18:39:24 -04:00
Benjamin Berman
db546c4826 Sometimes progress messages of images with metadata are not valid. Need to investigate why. 2025-09-16 22:04:39 -07:00
comfyanonymous
9288c78fc5
Support the HuMo model. (#9903) 2025-09-17 00:12:48 -04:00
rattus128
e42682b24e
Reduce Peak WAN inference VRAM usage (#9898)
* flux: Do the xq and xk ropes one at a time

This was doing independent interleaved tensor math on the q and k
tensors, leading to the holding of more than the minimum intermediates
in VRAM. On a bad day, it would VRAM OOM on xk intermediates.

Do everything q and then everything k, so torch can garbage collect
all of q's intermediates before k allocates its intermediates.

This reduces peak VRAM usage for some WAN2.2 inferences (at least).

* wan: Optimize qkv intermediates on attention

As commented. The former logic computed independent pieces of QKV in
parallel, which held more inference intermediates in VRAM, spiking
VRAM usage. Fully roping Q and garbage collecting the intermediates
before touching K reduces the peak inference VRAM usage.
2025-09-16 19:21:14 -04:00
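A minimal sketch of the memory pattern this commit describes (helper names and shapes are illustrative, not the actual ComfyUI code): fully roping q before touching k lets torch free q's intermediates before k allocates its own, lowering peak VRAM.

    import torch

    def rope_one(x: torch.Tensor, freqs_cis: torch.Tensor) -> torch.Tensor:
        # Rope a single tensor; its float intermediates are freed on return.
        x_ = x.float().reshape(*x.shape[:-1], -1, 1, 2)
        out = freqs_cis[..., 0] * x_[..., 0] + freqs_cis[..., 1] * x_[..., 1]
        return out.reshape(*x.shape).type_as(x)

    def apply_rope_sequential(xq, xk, freqs_cis):
        # Everything q, then everything k: peak VRAM holds one set of
        # intermediates instead of two interleaved sets.
        return rope_one(xq, freqs_cis), rope_one(xk, freqs_cis)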
comfyanonymous
a39ac59c3e
Add encoder part of whisper large v3 as an audio encoder model. (#9894)
Not useful yet but some models use it.
2025-09-16 01:19:50 -04:00
blepping
1a85483da1
Fix depending on asserts to raise an exception in BatchedBrownianTree and Flash attn module (#9884)
Correctly handle the case where w0 is passed by kwargs in BatchedBrownianTree
2025-09-15 20:05:03 -04:00
comfyanonymous
47a9cde5d3
Support the omnigen2 umo lora. (#9886) 2025-09-15 18:10:55 -04:00
Jedrzej Kosinski
f228367c5e
Make ModuleNotFoundError ImportError instead (#9850) 2025-09-13 21:34:21 -04:00
comfyanonymous
80b7c9455b
Changes to the previous radiance commit. (#9851) 2025-09-13 18:03:34 -04:00
blepping
c1297f4eb3
Add support for Chroma Radiance (#9682)
* Initial Chroma Radiance support

* Minor Chroma Radiance cleanups

* Update Radiance nodes to ensure latents/images are on the intermediate device

* Fix Chroma Radiance memory estimation.

* Increase Chroma Radiance memory usage factor

* Increase Chroma Radiance memory usage factor once again

* Ensure images are multiples of 16 for Chroma Radiance
Add batch dimension and fix channels when necessary in ChromaRadianceImageToLatent node

* Tile Chroma Radiance NeRF to reduce memory consumption, update memory usage factor

* Update Radiance to support conv nerf final head type.

* Allow setting NeRF embedder dtype for Radiance
Bump Radiance nerf tile size to 32
Support EasyCache/LazyCache on Radiance (maybe)

* Add ChromaRadianceStubVAE node

* Crop Radiance image inputs to multiples of 16 instead of erroring to be in line with existing VAE behavior

* Convert Chroma Radiance nodes to V3 schema.

* Add ChromaRadianceOptions node and backend support.
Cleanups/refactoring to reduce code duplication with Chroma.

* Fix overriding the NeRF embedder dtype for Chroma Radiance

* Minor Chroma Radiance cleanups

* Move Chroma Radiance to its own directory in ldm
Minor code cleanups and tooltip improvements

* Fix Chroma Radiance embedder dtype overriding

* Remove Radiance dynamic nerf_embedder dtype override feature

* Unbork Radiance NeRF embedder init

* Remove Chroma Radiance image conversion and stub VAE nodes
Add a chroma_radiance option to the VAELoader builtin node which uses comfy.sd.PixelspaceConversionVAE
Add a PixelspaceConversionVAE to comfy.sd for converting BHWC 0..1 <-> BCHW -1..1
2025-09-13 17:58:43 -04:00
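The pixel-space conversion "VAE" in the last bullet amounts to a pair of layout/range transforms; a minimal sketch under that description (function names are assumptions, not the comfy.sd API):

    import torch

    def bhwc01_to_bchw_pm1(image: torch.Tensor) -> torch.Tensor:
        # "encode": BHWC in [0, 1] -> BCHW in [-1, 1]
        return image.movedim(-1, 1) * 2.0 - 1.0

    def bchw_pm1_to_bhwc01(latent: torch.Tensor) -> torch.Tensor:
        # "decode": BCHW in [-1, 1] -> BHWC in [0, 1]
        return ((latent + 1.0) / 2.0).clamp(0.0, 1.0).movedim(1, -1)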
Kimbing Ng
e5e70636e7
Remove single quote pattern to avoid wrong matches (#9842) 2025-09-13 16:59:19 -04:00
comfyanonymous
29bf807b0e
Cleanup. (#9838) 2025-09-12 21:57:04 -04:00
Jukka Seppänen
2559dee492
Support wav2vec base models (#9637)
* Support wav2vec base models

* trim trailing whitespace

* Do interpolation after
2025-09-12 21:52:58 -04:00
comfyanonymous
a3b04de700
Hunyuan refiner vae now works with tiled. (#9836) 2025-09-12 19:46:46 -04:00
Jedrzej Kosinski
d7f40442f9
Enable Runtime Selection of Attention Functions (#9639)
* Looking into a @wrap_attn decorator to look for 'optimized_attention_override' entry in transformer_options

* Created logging code for this branch so that it can be used to track down all the code paths where transformer_options would need to be added

* Fix memory usage issue with inspect

* Made WAN attention receive transformer_options, test node added to wan to test out attention override later

* Added **kwargs to all attention functions so transformer_options could potentially be passed through

* Make sure wrap_attn doesn't make itself recurse infinitely, attempt to load SageAttention and FlashAttention if not enabled so that they can be marked as available or not, create registry for available attention

* Turn off attention logging for now, make AttentionOverrideTestNode have a dropdown with available attention (this is a test node only)

* Make flux work with optimized_attention_override

* Add logs to verify optimized_attention_override is passed all the way into attention function

* Make Qwen work with optimized_attention_override

* Made hidream work with optimized_attention_override

* Made wan patches_replace work with optimized_attention_override

* Made SD3 work with optimized_attention_override

* Made HunyuanVideo work with optimized_attention_override

* Made Mochi work with optimized_attention_override

* Made LTX work with optimized_attention_override

* Made StableAudio work with optimized_attention_override

* Made optimized_attention_override work with ACE Step

* Made Hunyuan3D work with optimized_attention_override

* Make CosmosPredict2 work with optimized_attention_override

* Made CosmosVideo work with optimized_attention_override

* Made Omnigen 2 work with optimized_attention_override

* Made StableCascade work with optimized_attention_override

* Made AuraFlow work with optimized_attention_override

* Made Lumina work with optimized_attention_override

* Made Chroma work with optimized_attention_override

* Made SVD work with optimized_attention_override

* Fix WanI2VCrossAttention so that it expects to receive transformer_options

* Fixed Wan2.1 Fun Camera transformer_options passthrough

* Fixed WAN 2.1 VACE transformer_options passthrough

* Add optimized to get_attention_function

* Disable attention logs for now

* Remove attention logging code

* Remove _register_core_attention_functions, as we wouldn't want someone to call that, just in case

* Satisfy ruff

* Remove AttentionOverrideTest node, that's something to cook up for later
2025-09-12 18:07:38 -04:00
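The mechanism above can be pictured as a decorator that consults transformer_options before falling back to the default attention; a simplified sketch (the real wrap_attn and the override's calling convention may differ):

    import functools

    def wrap_attn(attn_fn):
        @functools.wraps(attn_fn)
        def wrapper(*args, transformer_options=None, **kwargs):
            override = (transformer_options or {}).get("optimized_attention_override")
            if override is not None and override is not wrapper:
                # Hand the default implementation to the override so it can
                # fall back, without recursing into this wrapper again.
                return override(attn_fn, *args, **kwargs)
            return attn_fn(*args, transformer_options=transformer_options, **kwargs)
        return wrapper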
comfyanonymous
b149e2e1e3
Better way of doing the generator for the hunyuan image noise aug. (#9834) 2025-09-12 17:53:15 -04:00
comfyanonymous
7757d5a657
Set default hunyuan refiner shift to 4.0 (#9833) 2025-09-12 16:40:12 -04:00
comfyanonymous
e600520f8a
Fix hunyuan refiner blownout colors at noise aug less than 0.25 (#9832) 2025-09-12 16:35:34 -04:00
comfyanonymous
fd2b820ec2
Add noise augmentation to hunyuan image refiner. (#9831)
This was missing and should help with colors being blown out.
2025-09-12 16:03:08 -04:00
doctorpangloss
7cd6383110 Fix distributed previews, add sageattention to the docker container 2025-09-12 11:48:25 -07:00
comfyanonymous
33bd9ed9cb
Implement hunyuan image refiner model. (#9817) 2025-09-12 00:43:20 -04:00
comfyanonymous
18de0b2830
Fast preview for hunyuan image. (#9814) 2025-09-11 19:33:02 -04:00
comfyanonymous
e01e99d075
Support hunyuan image distilled model. (#9807) 2025-09-10 23:17:34 -04:00
doctorpangloss
421c9b88ae Fix import order 2025-09-10 16:36:03 -07:00
comfyanonymous
543888d3d8
Fix lowvram issue with hunyuan image vae. (#9794) 2025-09-10 02:15:34 -04:00
comfyanonymous
85e34643f8
Support hunyuan image 2.1 regular model. (#9792) 2025-09-10 02:05:07 -04:00
comfyanonymous
5c33872e2f
Fix issue on old torch. (#9791) 2025-09-10 00:23:47 -04:00
doctorpangloss
67432bee6c Fix model measurements to call 2025-09-09 18:29:56 -07:00
doctorpangloss
537e34358f Improve compatibility with custom nodes that want to support both LTS and vanilla ComfyUI 2025-09-09 17:54:49 -07:00
comfyanonymous
b288fb0db8
Small refactor of some vae code. (#9787) 2025-09-09 18:09:56 -04:00
doctorpangloss
4f6f3e9197 Fix linting issues, improve HooksSupport dummy implementation 2025-09-09 13:26:46 -07:00
doctorpangloss
f5e29f0e61 Fix sage attention for qwen-image, fix Qwen Image VAE memory usage, improve compatibility with hooks when using KSampler-based workflows and non-ModelPatcher model-management stuff 2025-09-09 12:58:23 -07:00
comfyanonymous
103a12cb66
Support qwen inpaint controlnet. (#9772) 2025-09-08 17:30:26 -04:00
contentis
97652d26b8
Add explicit casting in apply_rope for Qwen VL (#9759) 2025-09-08 15:08:18 -04:00
doctorpangloss
b62d4f05e1 more type documentation, fix installation with dependency group 2025-09-08 11:35:19 -07:00
comfyanonymous
fb763d4333
Fix amd_min_version crash when cpu device. (#9754) 2025-09-07 21:16:29 -04:00
comfyanonymous
bcbd7884e3
Don't enable pytorch attention on AMD if triton isn't available. (#9747) 2025-09-07 00:29:38 -04:00
comfyanonymous
27a0fcccc3
Enable bf16 VAE on RDNA4. (#9746) 2025-09-06 23:25:22 -04:00
comfyanonymous
ea6cdd2631
Print all fast options in --help (#9737) 2025-09-06 01:05:05 -04:00
comfyanonymous
2ee7879a0b
Fix lowvram issues with hunyuan3d 2.1 (#9735) 2025-09-05 14:57:35 -04:00
comfyanonymous
c9ebe70072
Some changes to the previous hunyuan PR. (#9725) 2025-09-04 20:39:02 -04:00
Yousef R. Gamaleldin
261421e218
Add Hunyuan 3D 2.1 Support (#8714) 2025-09-04 20:36:20 -04:00
doctorpangloss
a31e5f216d Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-09-04 14:31:14 -07:00
doctorpangloss
58622c7e91 Move validation back to where it is supposed to be relative to upstream 2025-09-04 14:05:19 -07:00
comfyanonymous
72855db715
Fix potential rope issue. (#9710) 2025-09-03 22:20:13 -04:00
doctorpangloss
6cffd4c4c2 Fix linting issue 2025-09-03 15:08:16 -07:00
doctorpangloss
0e5348b9a9 Fixes issues with running from a vanilla ComfyUI workspace and having this package installed (fixes #30) 2025-09-03 14:57:34 -07:00
doctorpangloss
8052d070de Fix pylint issues 2025-09-03 12:13:35 -07:00
doctorpangloss
179c2d35c8 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-09-03 12:04:32 -07:00
comfyanonymous
e3018c2a5a
uso -> uxo/uno as requested. (#9688) 2025-09-02 16:12:07 -04:00
comfyanonymous
3412d53b1d
USO style reference. (#9677)
Load the projector.safetensors file with the ModelPatchLoader node and use
the siglip_vision_patch14_384.safetensors "clip vision" model and the
USOStyleReferenceNode.
2025-09-02 15:36:22 -04:00
contentis
e2d1e5dad9
Enable Convolution AutoTuning (#9301) 2025-09-01 20:33:50 -04:00
comfyanonymous
27e067ce50
Implement the USO subject identity lora. (#9674)
Use the lora with FluxContextMultiReferenceLatentMethod node set to "uso"
and a ReferenceLatent node with the reference image.
2025-09-01 18:54:02 -04:00
chaObserv
32a627bf1f
SEEDS: update noise decomposition and refactor (#9633)
- Update the decomposition to reflect interval dependency
- Extract phi computations into functions
- Use torch.lerp for interpolation
2025-08-31 00:01:45 -04:00
comfyanonymous
e80a14ad50
Support wan2.2 5B fun control model. (#9611)
Use the Wan22FunControlToVideo node.
2025-08-28 22:13:07 -04:00
comfyanonymous
4aa79dbf2c
Adjust flux mem usage factor a bit. (#9588) 2025-08-27 23:08:17 -04:00
Gangin Park
3aad339b63
Add DPM++ 2M SDE Heun (RES) sampler (#9542) 2025-08-27 19:07:31 -04:00
comfyanonymous
491755325c
Better s2v memory estimation. (#9584) 2025-08-27 19:02:42 -04:00
comfyanonymous
496888fd68
Improve s2v performance when generating videos longer than 120 frames. (#9582) 2025-08-27 16:06:40 -04:00
comfyanonymous
b5ac6ed7ce
Fixes to make controlnet type models work on qwen edit and kontext. (#9581) 2025-08-27 15:26:28 -04:00
Kohaku-Blueleaf
b20ba1f27c
Fix #9537 (#9576) 2025-08-27 12:45:02 -04:00
comfyanonymous
88aee596a3
WIP Wan 2.2 S2V model. (#9568) 2025-08-27 01:10:34 -04:00
doctorpangloss
1b7d04b2e7 Add types for PreviewImageWithMetadataMessage 2025-08-26 17:19:46 -07:00
doctorpangloss
726804ca63 Improve the way the execution context is initialized 2025-08-26 16:55:41 -07:00
doctorpangloss
479e20d233 Fix attention dispatch bug 2025-08-26 16:40:45 -07:00
doctorpangloss
173b1ce0ae Add support for intuitive progress notifications when using comfyui as a library 2025-08-26 16:31:39 -07:00
doctorpangloss
35d890eafe fix logging in extra_config 2025-08-26 15:11:49 -07:00
doctorpangloss
1e938f5feb fix sdpa priorities 2025-08-26 14:33:00 -07:00
doctorpangloss
306fcbaa0e audio_encoders now a package correctly, make imports relative 2025-08-26 14:12:40 -07:00
doctorpangloss
752caf013c fix model upscaler, fix list iteration linting issue, fix samplers 2025-08-26 14:09:07 -07:00
doctorpangloss
443bb45eaf Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-08-26 13:59:45 -07:00
comfyanonymous
914c2a2973
Implement wav2vec2 as an audio encoder model. (#9549)
This is useless on its own but there are multiple models that use it.
2025-08-25 23:26:47 -04:00
comfyanonymous
41048c69b4
Fix Conditioning masks on 3d latents. (#9506) 2025-08-22 23:15:44 -04:00
Jedrzej Kosinski
fc247150fe
Implement EasyCache and Invent LazyCache (#9496)
* Attempting a universal implementation of EasyCache, starting with flux as test; I screwed up the math a bit, but when I set it just right it works.

* Fixed math to make threshold work as expected, refactored code to use EasyCacheHolder instead of a dict wrapped by object

* Use sigmas from transformer_options instead of timesteps to be compatible with a greater number of models, make end_percent work

* Make log statement when not skipping useful, preparing for per-cond caching

* Added DIFFUSION_MODEL wrapper around forward function for wan model

* Add subsampling for heuristic inputs

* Add subsampling to output_prev (output_prev_subsampled now)

* Properly consider conds in EasyCache logic

* Created SuperEasyCache to test what happens if caching and reuse is moved outside the scope of conds, added PREDICT_NOISE wrapper to facilitate this test

* Change max reuse_threshold to 3.0

* Mark EasyCache/SuperEasyCache as experimental (beta)

* Make Lumina2 compatible with EasyCache

* Add EasyCache support for Qwen Image

* Fix missing comma, curse you Cursor

* Add EasyCache support to AceStep

* Add EasyCache support to Chroma

* Added EasyCache support to Cosmos Predict t2i

* Make EasyCache not crash with Cosmos Predict ImageToVideo latents, but does not work well at all

* Add EasyCache support to hidream

* Added EasyCache support to hunyuan video

* Added EasyCache support to hunyuan3d

* Added EasyCache support to LTXV (not very good, but does not crash)

* Implemented EasyCache for aura_flow

* Renamed SuperEasyCache to LazyCache, hardcoded subsample_factor to 8 on nodes

* Extra logging when verbose is true for EasyCache
2025-08-22 22:41:08 -04:00
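At its core, EasyCache as described here reuses the previous model output whenever a cheap, subsampled input-change heuristic stays below a reuse threshold; a rough sketch, with all names and the exact heuristic being assumptions:

    import torch

    class EasyCacheHolder:
        def __init__(self, reuse_threshold: float, subsample_factor: int = 8):
            self.reuse_threshold = reuse_threshold
            self.subsample_factor = subsample_factor
            self.prev_input_sub = None
            self.prev_output = None

        def __call__(self, model_forward, x: torch.Tensor, *args, **kwargs):
            # Subsample so the heuristic stays cheap on large latents.
            x_sub = x.flatten()[:: self.subsample_factor]
            if self.prev_output is not None:
                rel_change = ((x_sub - self.prev_input_sub).abs().mean()
                              / (self.prev_input_sub.abs().mean() + 1e-8))
                if rel_change < self.reuse_threshold:
                    return self.prev_output  # skip the expensive forward pass
            out = model_forward(x, *args, **kwargs)
            self.prev_input_sub = x_sub.clone()
            self.prev_output = out
            return out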
doctorpangloss
9fcc597e1f Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-08-22 18:29:55 -07:00
doctorpangloss
e84bf5f025 Fix downloads and tests on Linux 2025-08-22 18:29:12 -07:00
doctorpangloss
7ce781b0fd Fix torch compile 2025-08-22 17:43:12 -07:00
doctorpangloss
735a133ad4 Update to 0.3.51 2025-08-22 17:29:18 -07:00
contentis
fe31ad0276
Add elementwise fusions (#9495)
* Add elementwise fusions

* Add addcmul pattern to Qwen
2025-08-22 19:39:15 -04:00
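The addcmul pattern mentioned for Qwen fuses a multiply-then-add into one elementwise kernel, avoiding a temporary; for example:

    import torch

    a, b, c = torch.randn(3, 4, 1024)

    out_unfused = c + a * b             # two kernels plus a temporary for a * b
    out_fused = torch.addcmul(c, a, b)  # one fused kernel computing c + a * b

    assert torch.allclose(out_unfused, out_fused)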
doctorpangloss
dfc47e0611 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-08-22 13:24:52 -07:00
comfyanonymous
ff57793659
Support InstantX Qwen controlnet. (#9488) 2025-08-22 00:53:11 -04:00
comfyanonymous
f7bd5e58dd
Make it easier to implement future qwen controlnets. (#9485) 2025-08-21 23:18:04 -04:00
comfyanonymous
0963493a9c
Support for Qwen Diffsynth Controlnets canny and depth. (#9465)
These are not real controlnets but actually a patch on the model so they
will be treated as such.

Put them in the models/model_patches/ folder.

Use the new ModelPatchLoader and QwenImageDiffsynthControlnet nodes.
2025-08-20 22:26:37 -04:00
comfyanonymous
8d38ea3bbf
Fix bf16 precision issue with qwen image embeddings. (#9441) 2025-08-20 02:58:54 -04:00
comfyanonymous
5a8f502db5
Disable prompt weights for qwen. (#9438) 2025-08-20 01:08:11 -04:00
comfyanonymous
7cd2c4bd6a
Qwen rotary embeddings should now match reference code. (#9437) 2025-08-20 00:45:27 -04:00
comfyanonymous
dfa791eb4b
Rope fix for qwen vl. (#9435) 2025-08-19 20:47:42 -04:00
comfyanonymous
4977f203fa
P2 of qwen edit model. (#9412)
* P2 of qwen edit model.

* Typo.

* Fix normal qwen.

* Fix.

* Make the TextEncodeQwenImageEdit also set the ref latent.

If you don't want it to set the ref latent and want to use the
ReferenceLatent node with your custom latent instead just disconnect the
VAE.
2025-08-18 22:38:34 -04:00
Jedrzej Kosinski
7f3b9b16c6
Make step index detection much more robust (#9392) 2025-08-17 18:54:07 -04:00
comfyanonymous
ed43784b0d
WIP Qwen edit model: The diffusion model part. (#9383) 2025-08-17 16:45:39 -04:00
comfyanonymous
0f2b8525bc
Qwen image model refactor. (#9375) 2025-08-16 17:51:28 -04:00
comfyanonymous
1702e6df16
Implement wan2.2 camera model. (#9357)
Use the old WanCameraImageToVideo node.
2025-08-15 17:29:58 -04:00
doctorpangloss
86c64e9d7c fix linting issue 2025-08-15 14:12:50 -07:00
comfyanonymous
c308a8840a
Add FluxKontextMultiReferenceLatentMethod node. (#9356)
This node is only useful if someone trains the kontext model to properly
use multiple reference images via the index method.

The default is the offset method, which feeds the multiple images as if
they were stitched together as one. This method works with the current
flux kontext model.
2025-08-15 15:50:39 -04:00
comfyanonymous
e08ecfbd8a
Add warning when using old pytorch. (#9347) 2025-08-15 00:22:26 -04:00
comfyanonymous
4e5c230f6a
Fix last commit not working on older pytorch. (#9346) 2025-08-14 23:44:02 -04:00
Xiangxi Guo (Ryan)
f0d5d0111f
Avoid torch compile graphbreak for older pytorch versions (#9344)
Turns out torch.compile has some gaps in context manager decorator
syntax support. I've sent patches to fix that in PyTorch, but it won't
be available for all the folks running older versions of PyTorch, hence
this trivial patch.
2025-08-14 23:41:37 -04:00
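A sketch of the two spellings in question (torch.autocast is used here only as an example of a context manager that also supports decorator syntax; the commit's actual context manager may differ):

    import torch

    # Decorator syntax: older torch.compile versions may graph-break here.
    @torch.autocast(device_type="cuda", dtype=torch.bfloat16)
    def forward_decorated(x):
        return x @ x

    # Equivalent explicit form that avoids the gap on older versions.
    def forward_explicit(x):
        with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
            return x @ x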
comfyanonymous
ad19a069f6
Make SLG nodes work on Qwen Image model. (#9345) 2025-08-14 23:16:01 -04:00
Benjamin Berman
b3d95afcab easier support for colab 2025-08-14 09:50:46 -07:00
Jedrzej Kosinski
e4f7ea105f
Added context window support to core sampling code (#9238)
* Added initial support for basic context windows - in progress

* Add prepare_sampling wrapper for context window to more accurately estimate latent memory requirements, fixed merging wrappers/callbacks dicts in prepare_model_patcher

* Made context windows compatible with different dimensions; works for WAN, but results are bad

* Fix comfy.patcher_extension.merge_nested_dicts calls in prepare_model_patcher in sampler_helpers.py

* Considering adding some callbacks to context window code to allow extensions of behavior without the need to rewrite code

* Made dim slicing cleaner

* Add Wan Context Windows node for testing

* Made context schedule and fuse method functions be stored on the handler instead of needing to be registered in core code to be found

* Moved some code around between node_context_windows.py and context_windows.py

* Change manual context window nodes names/ids

* Added callbacks to IndexListContextHandler

* Adjusted default values for context_length and context_overlap, made schema.inputs definition for WAN Context Windows less annoying

* Make get_resized_cond more robust for various dim sizes

* Fix typo

* Another small fix
2025-08-13 21:33:05 -04:00
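Conceptually, the context window handler slices one dimension of the latent into overlapping windows, runs the model per window, and fuses the overlaps; a bare-bones sketch (names and the mean-fuse method are assumptions; it assumes the dimension is at least context_length long and context_overlap < context_length):

    import torch

    def sample_with_context_windows(model, x, dim, context_length, context_overlap):
        size = x.shape[dim]
        stride = context_length - context_overlap
        starts = list(range(0, size - context_length + 1, stride))
        if starts[-1] + context_length < size:
            starts.append(size - context_length)  # cover the tail

        out = torch.zeros_like(x)
        counts = torch.zeros_like(x)
        for s in starts:
            idx = torch.arange(s, s + context_length, device=x.device)
            window = x.index_select(dim, idx)
            result = model(window)
            out.index_add_(dim, idx, result)
            counts.index_add_(dim, idx, torch.ones_like(result))
        return out / counts  # average the overlapping regions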
Simon Lui
c991a5da65
Fix XPU iGPU regressions (#9322)
* Change the bf16 check, switch non-blocking off by default with an option to force it (to regain speed on certain classes of iGPUs), and refactor the xpu check.

* Turn non_blocking off by default for xpu.

* Update README.md for Intel GPUs.
2025-08-13 19:13:35 -04:00
comfyanonymous
9df8792d4b
Make last PR not crash comfy on old pytorch. (#9324) 2025-08-13 15:12:41 -04:00
contentis
3da5a07510
SDPA backend priority (#9299) 2025-08-13 14:53:27 -04:00
comfyanonymous
560d38f34c
Wan2.2 fun control support. (#9292) 2025-08-12 23:26:33 -04:00
PsychoLogicAu
2208aa616d
Support SimpleTuner lycoris lora for Qwen-Image (#9280) 2025-08-11 16:56:16 -04:00
Benjamin Berman
deba203176
Disable retries for hub downloading until xet is updated 2025-08-11 12:12:34 -07:00
Benjamin Berman
027648e398
Fix model name bug with sd1 2025-08-11 12:08:25 -07:00
comfyanonymous
5828607ccf
Not sure if AMD actually supports fp16 acc but it doesn't crash. (#9258) 2025-08-09 12:49:25 -04:00
comfyanonymous
735bb4bdb1
Users report gfx1201 is buggy on flux with pytorch attention. (#9244) 2025-08-08 04:21:00 -04:00
doctorpangloss
cd17b42664 Qwen Image with sage attention workarounds 2025-08-07 17:29:23 -07:00
doctorpangloss
b72e5ff448 Qwen Image with 4 bit and 8 bit quants 2025-08-07 13:40:05 -07:00
doctorpangloss
d8dbff9226 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-08-07 13:23:38 -07:00
doctorpangloss
b42b2322fa Better interaction with huggingface caches 2025-08-07 13:19:20 -07:00
flybirdxx
4c3e57b0ae
Fixed an issue where qwenLora could not be loaded properly. (#9208) 2025-08-06 13:23:11 -04:00
comfyanonymous
d044a24398
Fix default shift and any latent size for qwen image model. (#9186) 2025-08-05 06:12:27 -04:00
comfyanonymous
c012400240
Initial support for qwen image model. (#9179) 2025-08-04 22:53:25 -04:00
doctorpangloss
7ba86929f7 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-08-04 13:44:45 -07:00
comfyanonymous
03895dea7c
Fix another issue with the PR. (#9170) 2025-08-04 04:33:04 -04:00
comfyanonymous
84f9759424
Add some warnings and prevent crash when cond devices don't match. (#9169) 2025-08-04 04:20:12 -04:00
comfyanonymous
7991341e89
Various fixes for broken things from earlier PR. (#9168) 2025-08-04 04:02:40 -04:00
comfyanonymous
140ffc7fdc
Fix broken controlnet from last PR. (#9167) 2025-08-04 03:28:12 -04:00
comfyanonymous
182f90b5ec
Lower cond vram use by casting at the same time as device transfer. (#9159) 2025-08-04 03:11:53 -04:00
comfyanonymous
aebac22193
Cleanup. (#9160) 2025-08-03 07:08:11 -04:00
comfyanonymous
13aaa66ec2
Make sure context is on the right device. (#9154) 2025-08-02 15:09:23 -04:00
comfyanonymous
5f582a9757
Make sure all the conds are on the right device. (#9151) 2025-08-02 15:00:13 -04:00
doctorpangloss
59bc6afa7b Fixes to tests and progress bars in xet 2025-08-01 17:26:30 -07:00
doctorpangloss
87bed08124 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-08-01 16:05:47 -07:00
comfyanonymous
1e638a140b
Tiny wan vae optimizations. (#9136) 2025-08-01 05:25:38 -04:00
doctorpangloss
3c9d311dee Improvements to torch.compile
- the model weights generally have to be patched ahead of time for compilation to work
- the model downloader matches the folder_paths API a bit better
- tweak the logging from the execution node
2025-07-30 19:27:40 -07:00
doctorpangloss
83184916c1 Improvements to GGUF
- GGUF now works and includes 4 bit quants; this will allow WAN to run on 24GB VRAM GPUs
- logger only shows full stack for errors
- helper functions for colab notebook
- fix nvcr.io auth error
- lora issues are now reported better
- model downloader will use huggingface cache and symlinking if it is supported on your platform
- torch compile node now correctly patches the model before compilation
2025-07-30 18:28:52 -07:00
chaObserv
61b08d4ba6
Replace manual x * sigmoid(x) with torch silu in VAE nonlinearity (#9057) 2025-07-30 19:25:56 -04:00
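This substitution is a direct identity, SiLU(x) = x · sigmoid(x), so the fused op drops in for the manual form:

    import torch
    import torch.nn.functional as F

    x = torch.randn(8)
    assert torch.allclose(x * torch.sigmoid(x), F.silu(x), atol=1e-6)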
comfyanonymous
da9dab7edd
Small wan camera memory optimization. (#9111) 2025-07-30 05:55:26 -04:00
comfyanonymous
dca6bdd4fa
Make wan2.2 5B i2v take a lot less memory. (#9102) 2025-07-29 19:44:18 -04:00
comfyanonymous
7d593baf91
Extra reserved vram on large cards on windows. (#9093) 2025-07-29 04:07:45 -04:00
doctorpangloss
69a4906964 Experimental GGUF support 2025-07-28 17:02:20 -07:00
doctorpangloss
fe4057385d Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-07-28 14:38:18 -07:00
doctorpangloss
03e5430121 Improvements for Wan 2.2 support
- add xet support and add the xet cache to manageable directories
- xet is enabled by default
- fix logging to root in various places
- improve logging about model unloading and loading
- TorchCompileNode now supports the VAE
- torchaudio missing will cause less noise in the logs
- feature flags are assumed to support everything in the distributed progress context
- fixes progress notifications
2025-07-28 14:36:27 -07:00
comfyanonymous
c60dc4177c
Remove unnecessary clones in the wan2.2 VAE. (#9083) 2025-07-28 14:48:19 -04:00
doctorpangloss
b5a50301f6 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-07-28 10:06:47 -07:00
comfyanonymous
a88788dce6
Wan 2.2 support. (#9080) 2025-07-28 08:00:23 -04:00
Benjamin Berman
583ddd6b38 Always use spawn context, even on Linux 2025-07-26 17:42:44 -07:00
comfyanonymous
0621d73a9c
Remove useless code. (#9059) 2025-07-26 04:44:19 -04:00
doctorpangloss
a3ae6e74d2 fix tests 2025-07-25 15:01:30 -07:00
doctorpangloss
3684cff31b Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-07-25 12:48:05 -07:00
comfyanonymous
e6e5d33b35
Remove useless code. (#9041)
This is only needed on old pytorch 2.0 and older.
2025-07-25 04:58:28 -04:00
Eugene Fairley
4293e4da21
Add WAN ATI support (#8874)
* Add WAN ATI support

* Fixes

* Fix length

* Remove extra functions

* Fix

* Fix

* Ruff fix

* Remove torch.no_grad

* Add batch trajectory logic

* Scale inputs before and after motion patch

* Batch image/trajectory

* Ruff fix

* Clean up
2025-07-24 20:59:19 -04:00
comfyanonymous
69cb57b342
Print xpu device name. (#9035) 2025-07-24 15:06:25 -04:00
honglyua
0ccc88b03f
Support Iluvatar CoreX (#8585)
* Support Iluvatar CoreX
Co-authored-by: mingjiang.li <mingjiang.li@iluvatar.com>
2025-07-24 13:57:36 -04:00
Kohaku-Blueleaf
eb2f78b4e0
[Training Node] algo support, grad acc, optional grad ckpt (#9015)
* Add factorization utils for lokr

* Add lokr train impl

* Add loha train impl

* Add adapter map for algo selection

* Add optional grad ckpt and algo selection

* Update __init__.py

* correct key name for loha

* Use custom fwd/bwd func and better init for loha

* Support gradient accumulation

* Fix bugs of loha

* use more stable init

* Add OFT training

* linting
2025-07-23 20:57:27 -04:00
chaObserv
e729a5cc11
Separate denoised and noise estimation in Euler CFG++ (#9008)
This will change their behavior with the sampling CONST type.
It also combines euler_cfg_pp and euler_ancestral_cfg_pp into one main function.
2025-07-23 19:47:05 -04:00
comfyanonymous
d3504e1778
Enable pytorch attention by default for gfx1201 on torch 2.8 (#9029) 2025-07-23 19:21:29 -04:00
comfyanonymous
a86a58c308
Fix xpu function not implemented p2. (#9027) 2025-07-23 18:18:20 -04:00
comfyanonymous
39dda1d40d
Fix xpu function not implemented. (#9026) 2025-07-23 18:10:59 -04:00
comfyanonymous
5ad33787de
Add default device argument. (#9023) 2025-07-23 14:20:49 -04:00
Simon Lui
255f139863
Add xpu version for async offload and some other things. (#9004) 2025-07-22 15:20:09 -04:00
doctorpangloss
9fec871dbc fix logger misspelling colab, add readme instructions for running in colab and so-called comfyui portable 2025-07-17 16:01:03 -07:00
doctorpangloss
9376830295 fix windows standalone build 2025-07-17 14:18:01 -07:00
doctorpangloss
c2c45e061e fix noisy logging messages 2025-07-17 13:06:55 -07:00
doctorpangloss
d709e10bcb update docker image 2025-07-17 12:45:59 -07:00
doctorpangloss
8fcccad6a2 fix linting error 2025-07-16 15:32:58 -07:00
doctorpangloss
e455b0ad2e fix opentel 2025-07-16 15:19:19 -07:00
doctorpangloss
6ecb9e71f6 this is not disabling the meterprovider successfully 2025-07-16 14:33:06 -07:00
doctorpangloss
8176078118 add configuration args types 2025-07-16 14:26:42 -07:00
doctorpangloss
94310e51e3 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-07-16 14:24:12 -07:00
doctorpangloss
d4d62500ac fix tests, disable opentel metrics provider 2025-07-16 14:21:50 -07:00
doctorpangloss
f23fe2e2cf updates to dockerfile and pyproject.yaml 2025-07-16 13:46:05 -07:00
comfyanonymous
491fafbd64
Silence clip tokenizer warning. (#8934) 2025-07-16 14:42:07 -04:00
Harel Cain
9bc2798f72
LTXV VAE decoder: switch default padding mode (#8930) 2025-07-16 13:54:38 -04:00
comfyanonymous
50afba747c
Add attempt to work around the safetensors mmap issue. (#8928) 2025-07-16 03:42:17 -04:00
doctorpangloss
bf3345e083 Add support for ComfyUI Manager 2025-07-15 14:06:29 -07:00
doctorpangloss
f1cfff6da5 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-07-15 10:21:19 -07:00
doctorpangloss
96b4e04315 packaging fixes
- enable user db
- fix main_pre order everywhere
- fix absolute to relative imports everywhere
- better async support
2025-07-15 10:19:33 -07:00
Yoland Yan
543c24108c
Fix wrong reference bug (#8910) 2025-07-14 20:45:55 -04:00
doctorpangloss
c086c5e005 fix pylint issues 2025-07-14 17:44:43 -07:00
doctorpangloss
499f9be5fa merging upstream with hiddenswitch
- deprecate our preview image fork of the frontend because upstream now has the needed functionality
- merge the context executor from upstream into ours
2025-07-14 16:55:47 -07:00
doctorpangloss
bd6f28e3bd Move back to comfy_execution directory 2025-07-14 13:48:34 -07:00
doctorpangloss
04e411c32e Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-07-14 13:45:09 -07:00
comfyanonymous
b40143984c
Add model detection error hint for lora. (#8880) 2025-07-12 03:49:26 -04:00
doctorpangloss
f576f8124a record workflow when there's an error 2025-07-11 13:51:45 -07:00
comfyanonymous
938d3e8216
Remove windows line endings. (#8866) 2025-07-11 02:37:51 -04:00
guill
2b653e8c18
Support for async node functions (#8830)
* Support for async execution functions

This commit adds support for node execution functions defined as async. When
a node's execution function is defined as async, we can continue
executing other nodes while it is processing.

Standard uses of `await` should "just work", but people will still have
to be careful if they spawn actual threads. Because torch doesn't really
have async/await versions of functions, this won't particularly help
with most locally-executing nodes, but it does work for e.g. web
requests to other machines.

In addition to the execute function, the `VALIDATE_INPUTS` and
`check_lazy_status` functions can also be defined as async, though we'll
only resolve one node at a time right now for those.

* Add the execution model tests to CI

* Add a missing file

It looks like this got caught by .gitignore? There's probably a better
place to put it, but I'm not sure what that is.

* Add the websocket library for automated tests

* Add additional tests for async error cases

Also fixes one bug that was found when an async function throws an error
after being scheduled on a task.

* Add a feature flags message to reduce bandwidth

We now only send 1 preview message of the latest type the client can
support.

We'll add a console warning when the client fails to send a feature
flags message at some point in the future.

* Add async tests to CI

* Don't actually add new tests in this PR

Will do it in a separate PR

* Resolve unit test in GPU-less runner

* Just remove the tests that GHA can't handle

* Change line endings to UNIX-style

* Avoid loading model_management.py so early

Because model_management.py has a top-level `logging.info`, we have to
be careful not to import that file before we call `setup_logging`. If we
do, we end up having the default logging handler registered in addition
to our custom one.
2025-07-10 14:46:19 -04:00
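With this change a node's execute function can be declared async and awaited by the executor while other nodes run; a minimal hypothetical node following the usual ComfyUI node conventions (the node itself is illustrative, not from this PR):

    import aiohttp

    class FetchRemoteText:
        @classmethod
        def INPUT_TYPES(cls):
            return {"required": {"url": ("STRING", {"default": "https://example.com"})}}

        RETURN_TYPES = ("STRING",)
        FUNCTION = "execute"
        CATEGORY = "examples"

        async def execute(self, url):
            # While this request is awaited, the executor can progress
            # other nodes in the graph.
            async with aiohttp.ClientSession() as session:
                async with session.get(url) as resp:
                    return (await resp.text(),)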
doctorpangloss
1338e8935f fix metrics exporter messing up health endpoint 2025-07-09 07:15:42 -07:00
doctorpangloss
b268296504 update upstream for flux fixes 2025-07-09 06:28:17 -07:00
chaObserv
aac10ad23a
Add SA-Solver sampler (#8834) 2025-07-08 16:17:06 -04:00
josephrocca
974254218a
Un-hardcode chroma patch_size (#8840) 2025-07-08 15:56:59 -04:00
comfyanonymous
75d327abd5
Remove some useless code. (#8812) 2025-07-06 07:07:39 -04:00
comfyanonymous
ee615ac269
Add warning when loading file unsafely. (#8800) 2025-07-05 14:34:57 -04:00
chaObserv
f41f323c52
Add the denoising step to several samplers (#8780) 2025-07-03 19:20:53 -04:00
City
d9277301d2
Initial code for new SLG node (#8759) 2025-07-02 20:13:43 -04:00
comfyanonymous
111f583e00
Fallback to regular op when fp8 op throws exception. (#8761) 2025-07-02 00:57:13 -04:00
chaObserv
b22e97dcfa
Migrate ER-SDE from VE to VP algorithm and add its sampler node (#8744)
Apply alpha scaling in the algorithm for reverse-time SDE and add custom ER-SDE sampler node for other solver types (SDE, ODE).
2025-07-01 02:38:52 -04:00
comfyanonymous
170c7bb90c
Fix contiguous issue with pytorch nightly. (#8729) 2025-06-29 06:38:40 -04:00
comfyanonymous
396454fa41
Reorder the schedulers so simple is the default one. (#8722) 2025-06-28 18:12:56 -04:00
xufeng
ba9548f756
"--whitelist-custom-nodes" args for comfy core to go with "--disable-all-custom-nodes" for development purposes (#8592)
* feat: "--whitelist-custom-nodes" args for comfy core to go with "--disable-all-custom-nodes" for development purposes

* feat: Simplify custom nodes whitelist logic to use consistent code paths
2025-06-28 15:24:02 -04:00
comfyanonymous
c36be0ea09
Fix memory estimation bug with kontext. (#8709) 2025-06-27 17:21:12 -04:00
comfyanonymous
9093301a49
Don't add tiny bit of random noise when VAE encoding. (#8705)
Shouldn't change outputs but might make things a tiny bit more
deterministic.
2025-06-27 14:14:56 -04:00
doctorpangloss
a7aff3565b Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-06-26 16:57:25 -07:00
comfyanonymous
ef5266b1c1
Support Flux Kontext Dev model. (#8679) 2025-06-26 11:28:41 -04:00
comfyanonymous
a96e65df18
Disable omnigen2 fp16 on older pytorch versions. (#8672) 2025-06-26 03:39:09 -04:00
comfyanonymous
ec70ed6aea
Omnigen2 model implementation. (#8669) 2025-06-25 19:35:57 -04:00
comfyanonymous
7a13f74220
unet -> diffusion model (#8659) 2025-06-25 04:52:34 -04:00
doctorpangloss
e034d0bb24 fix sampling bug 2025-06-24 16:37:20 -07:00
doctorpangloss
f6339e8115 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-06-24 12:52:54 -07:00
doctorpangloss
f8eea225d4 modify main.py 2025-06-24 12:30:54 -07:00
doctorpangloss
8838957554 move main.py 2025-06-24 12:30:32 -07:00
doctorpangloss
8eba1733c8 remove main.py 2025-06-24 12:29:47 -07:00
chaObserv
8042eb20c6
Singlestep DPM++ SDE for RF (#8627)
Refactor the algorithm, and apply alpha scaling.
2025-06-24 14:59:09 -04:00
comfyanonymous
1883e70b43
Fix exception when using a noise mask with cosmos predict2. (#8621)
* Fix exception when using a noise mask with cosmos predict2.

* Fix ruff.
2025-06-21 03:30:39 -04:00
comfyanonymous
f7fb193712
Small flux optimization. (#8611) 2025-06-20 05:37:32 -04:00
comfyanonymous
7e9267fa77
Make flux controlnet work with sd3 text enc. (#8599) 2025-06-19 18:50:05 -04:00
comfyanonymous
91d40086db
Fix pytorch warning. (#8593) 2025-06-19 11:04:52 -04:00
Benjamin Berman
8041b1b54d fix ideogram seed, fix polling for results, fix logger 2025-06-18 09:38:20 -07:00
Benjamin Berman
d9ee4137c7 gracefully handle logs when they are never set up (not sure why this is occurring) 2025-06-18 06:14:09 -07:00
Benjamin Berman
f507bec91a fix issue with queue retrieval in distributed environment, fix text progress, fix folder paths being aggressively resolved, fix ideogram seed 2025-06-18 02:43:50 -07:00
doctorpangloss
1d29c97266 fix tests 2025-06-17 18:38:42 -07:00
doctorpangloss
7d1f840636 adding tests 2025-06-17 14:26:47 -07:00
doctorpangloss
8f836ad2ee fix default path 2025-06-17 13:54:57 -07:00
doctorpangloss
cc338a9d3e this alembic file 2025-06-17 13:25:02 -07:00
doctorpangloss
0779bac12c move db and models 2025-06-17 13:21:56 -07:00
doctorpangloss
1d901e32eb Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-06-17 13:20:31 -07:00
doctorpangloss
fb7846db50 Merge branch 'master' of github.com:hiddenswitch/ComfyUI 2025-06-17 13:02:48 -07:00
doctorpangloss
7d5a9f17a4 Add more known models 2025-06-17 12:47:48 -07:00
doctorpangloss
adb68f5623 fix logging in model_downloader, remove nf4 flux support since it is broken and unused 2025-06-17 12:14:08 -07:00
doctorpangloss
3d0306b89f more fixes from pylint 2025-06-17 11:36:41 -07:00
doctorpangloss
d79d7a7e08 fix imports and other basic problems 2025-06-17 11:19:48 -07:00
doctorpangloss
e8cf6489d2 modify hook breaker to work 2025-06-17 10:39:51 -07:00
doctorpangloss
cdcbdaecb0 move hook breaker 2025-06-17 10:39:27 -07:00
doctorpangloss
3bf0b01e47 del hook breaker 2025-06-17 10:39:09 -07:00
doctorpangloss
2b293de1b1 move comfyui_version to the right place so it's registered as a move 2025-06-17 10:36:20 -07:00
doctorpangloss
a5cd8fae7e rm __init__.py which has the version 2025-06-17 10:35:45 -07:00
doctorpangloss
82388d51a2 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-06-17 10:35:10 -07:00
chaObserv
8e81c507d2
Multistep DPM++ SDE samplers for RF (#8541)
Include alpha in sampling and minor refactoring
2025-06-16 14:47:10 -04:00
comfyanonymous
e1c6dc720e
Allow setting min_length with tokenizer_data. (#8547) 2025-06-16 13:43:52 -04:00
comfyanonymous
7ea79ebb9d
Add correct eps to ltxv rmsnorm. (#8542) 2025-06-15 12:21:25 -04:00
comfyanonymous
d6a2137fc3
Support Cosmos predict2 image to video models. (#8535)
Use the CosmosPredict2ImageToVideoLatent node.
2025-06-14 21:37:07 -04:00
chaObserv
53e8d8193c
Generalize SEEDS samplers (#8529)
Restore VP algorithm for RF and refactor noise_coeffs and half-logSNR calculations
2025-06-14 16:58:16 -04:00
comfyanonymous
29596bd53f
Small cosmos attention code refactor. (#8530) 2025-06-14 05:02:05 -04:00
Kohaku-Blueleaf
520eb77b72
LoRA Trainer: LoRA training node in weight adapter scheme (#8446) 2025-06-13 19:25:59 -04:00
comfyanonymous
c69af655aa
Uncap cosmos predict2 res and fix mem estimation. (#8518) 2025-06-13 07:30:18 -04:00
comfyanonymous
251f54a2ad
Basic initial support for cosmos predict2 text to image 2B and 14B models. (#8517) 2025-06-13 07:05:23 -04:00
pythongosssss
50c605e957
Add support for sqlite database (#8444)
* Add support for sqlite database

* fix
2025-06-11 16:43:39 -04:00
comfyanonymous
8a4ff747bd
Fix mistake in last commit. (#8496)
* Move to right place.
2025-06-11 15:13:29 -04:00
comfyanonymous
af1eb58be8
Fix black images on some flux models in fp16. (#8495) 2025-06-11 15:09:11 -04:00
Benjamin Berman
cc9a0935a4 fix an issue normalizing paths for comparison when the node contains an integer list value 2025-06-11 07:23:57 -07:00
comfyanonymous
6e28a46454
Apple most likely is never fixing the fp16 attention bug. (#8485) 2025-06-10 13:06:24 -04:00
comfyanonymous
7f800d04fa
Enable AMD fp8 and pytorch attention on some GPUs. (#8474)
Information is from the pytorch source code.
2025-06-09 12:50:39 -04:00
comfyanonymous
97755eed46
Enable fp8 ops by default on gfx1201 (#8464) 2025-06-08 14:15:34 -04:00
comfyanonymous
daf9d25ee2
Cleaner torch version comparisons. (#8453) 2025-06-07 10:01:15 -04:00
comfyanonymous
3b4b171e18
Alternate fix for #8435 (#8442) 2025-06-06 09:43:27 -04:00
Benjamin Berman
d94b0cce93 better logging 2025-06-05 20:59:36 -07:00
comfyanonymous
4248b1618f
Let chroma TE work on regular flux. (#8429) 2025-06-05 10:07:17 -04:00
comfyanonymous
fb4754624d
Make the casting in lists the same as regular inputs. (#8373) 2025-06-01 05:39:54 -04:00
comfyanonymous
19e45e9b0e
Make it easier to pass lists of tensors to models. (#8358) 2025-05-31 20:00:20 -04:00
drhead
08b7cc7506
use fused multiply-add pointwise ops in chroma (#8279) 2025-05-30 18:09:54 -04:00
comfyanonymous
704fc78854
Put ROCm version in tuple to make it easier to enable stuff based on it. (#8348) 2025-05-30 15:41:02 -04:00
comfyanonymous
f2289a1f59
Delete useless file. (#8327) 2025-05-29 08:29:37 -04:00
comfyanonymous
5e5e46d40c
Not really tested WAN Phantom Support. (#8321) 2025-05-28 23:46:15 -04:00
comfyanonymous
1c1687ab1c
Support HiDream SimpleTuner loras. (#8318) 2025-05-28 18:47:15 -04:00
comfyanonymous
06c661004e
Memory estimation code can now take into account conds. (#8307) 2025-05-27 15:09:05 -04:00
comfyanonymous
89a84e32d2
Disable initial GPU load when novram is used. (#8294) 2025-05-26 16:39:27 -04:00
comfyanonymous
e5799c4899
Enable pytorch attention by default on AMD gfx1151 (#8282) 2025-05-26 04:29:25 -04:00
comfyanonymous
a0651359d7
Return proper error if diffusion model not detected properly. (#8272) 2025-05-25 05:28:11 -04:00
comfyanonymous
5a87757ef9
Better error if sageattention is installed but a dependency is missing. (#8264) 2025-05-24 06:43:12 -04:00
comfyanonymous
0b50d4c0db
Add argument to explicitly enable fp8 compute support. (#8257)
This can be used to test if your current GPU/pytorch version supports fp8 matrix mult in combination with --fast or the fp8_e4m3fn_fast dtype.
2025-05-23 17:43:50 -04:00
drhead
30b2eb8a93
create arange on-device (#8255) 2025-05-23 16:15:06 -04:00
comfyanonymous
f85c08df06
Make VACE conditionings stackable. (#8240) 2025-05-22 19:22:26 -04:00
comfyanonymous
87f9130778
Revert "This doesn't seem to be needed on chroma. (#8209)" (#8210)
This reverts commit 7e84bf5373.
2025-05-20 05:39:55 -04:00
comfyanonymous
7e84bf5373
This doesn't seem to be needed on chroma. (#8209) 2025-05-20 05:29:23 -04:00
doctorpangloss
6d8deff056 Switch to uv for packaging 2025-05-19 15:58:08 -07:00
doctorpangloss
a3ad9bdb1a fix legacy kwargs 2025-05-17 16:10:54 -07:00
comfyanonymous
aee2908d03
Remove useless log. (#8166) 2025-05-17 06:27:34 -04:00
comfyanonymous
1c2d45d2b5
Fix typo in last PR. (#8144)
More robust model detection for future proofing.
2025-05-15 19:02:19 -04:00
George0726
c820ef950d
Add Wan-FUN Camera Control models and Add WanCameraImageToVideo node (#8013)
* support wan camera models

* fix by ruff check

* change camera_condition type; make camera_condition optional

* support camera trajectory nodes

* fix camera direction

---------

Co-authored-by: Qirui Sun <sunqr0667@126.com>
2025-05-15 19:00:43 -04:00
Christian Byrne
98ff01e148
Display progress and result URL directly on API nodes (#8102)
* [Luma] Print download URL of successful task result directly on nodes (#177)

[Veo] Print download URL of successful task result directly on nodes (#184)

[Recraft] Print download URL of successful task result directly on nodes (#183)

[Pixverse] Print download URL of successful task result directly on nodes (#182)

[Kling] Print download URL of successful task result directly on nodes (#181)

[MiniMax] Print progress text and download URL of successful task result directly on nodes (#179)

[Docs] Link to docs in `API_NODE` class property type annotation comment (#178)

[Ideogram] Print download URL of successful task result directly on nodes (#176)

Show output URL and progress text on Pika nodes (#168)

[BFL] Print download URL of successful task result directly on nodes (#175)

[OpenAI] Print download URL of successful task result directly on nodes (#174)

* fix ruff errors

* fix 3.10 syntax error
2025-05-14 00:33:18 -04:00
comfyanonymous
4a9014e201
Hunyuan Custom initial untested implementation. (#8101) 2025-05-13 15:53:47 -04:00
comfyanonymous
a814f2e8cc
Fix issue with old pytorch RMSNorm. (#8095) 2025-05-13 07:54:28 -04:00
comfyanonymous
481732a0ed
Support official ACE Step loras. (#8094) 2025-05-13 07:32:16 -04:00
comfyanonymous
640c47e7de
Fix torch warning about deprecated function. (#8075)
Drop support for torch versions below 2.2 on the audio VAEs.
2025-05-12 14:32:01 -04:00
comfyanonymous
577de83ca9
ACE VAE works in fp16. (#8055) 2025-05-11 04:58:00 -04:00
comfyanonymous
d42613686f
Fix issue with fp8 ops on some models. (#8045)
_scaled_mm errors when an input is non-contiguous.
2025-05-10 07:52:56 -04:00
Pam
1b3bf0a5da
Fix res_multistep_ancestral sampler (#8030) 2025-05-09 20:14:13 -04:00
blepping
42da274717
Use normal ComfyUI attention in ACE-Steps model (#8023)
* Use normal ComfyUI attention in ACE-Steps model

* Let optimized_attention handle output reshape for ACE
2025-05-09 13:51:02 -04:00
comfyanonymous
8ab15c863c
Add --mmap-torch-files to enable use of mmap when loading ckpt/pt (#8021) 2025-05-09 04:52:47 -04:00
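A plausible mechanism behind this flag is torch.load's mmap option (available in recent PyTorch); a sketch of the effect, with the path being a placeholder:

    import torch

    # With mmap=True the checkpoint file is memory-mapped instead of read
    # fully into RAM up front; tensor data is paged in on access.
    state_dict = torch.load("model.ckpt", map_location="cpu",
                            mmap=True, weights_only=True)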
comfyanonymous
a692c3cca4
Make ACE VAE tiling work. (#8004) 2025-05-08 07:25:45 -04:00
comfyanonymous
5d3cc85e13
Make japanese hiragana and katakana characters work with ACE. (#7997) 2025-05-08 03:32:36 -04:00
comfyanonymous
c7c025b8d1
Adjust memory estimation code for ACE VAE. (#7990) 2025-05-08 01:22:23 -04:00
comfyanonymous
fd08e39588
Make torchaudio not a hard requirement. (#7987)
Some platforms can't install it apparently so if it's not there it should
only break models that actually use it.
2025-05-07 21:37:12 -04:00
comfyanonymous
56b6ee6754
Detection code to make ltxv models without config work. (#7986) 2025-05-07 21:28:24 -04:00