Commit Graph

499 Commits

doctorpangloss
443bb45eaf Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-08-26 13:59:45 -07:00
Jedrzej Kosinski
fc247150fe
Implement EasyCache and Invent LazyCache (#9496)
* Attempting a universal implementation of EasyCache, starting with flux as a test; I got the math slightly wrong at first, but it works when set just right.

* Fixed math to make the threshold work as expected, refactored code to use EasyCacheHolder instead of a dict wrapped in an object

* Use sigmas from transformer_options instead of timesteps to be compatible with a greater number of models, make end_percent work

* Make the log statement emitted when not skipping useful, in preparation for per-cond caching

* Added DIFFUSION_MODEL wrapper around forward function for wan model

* Add subsampling for heuristic inputs

* Add subsampling to output_prev (output_prev_subsampled now)

* Properly consider conds in EasyCache logic

* Created SuperEasyCache to test what happens if caching and reuse are moved outside the scope of conds, added PREDICT_NOISE wrapper to facilitate this test

* Change max reuse_threshold to 3.0

* Mark EasyCache/SuperEasyCache as experimental (beta)

* Make Lumina2 compatible with EasyCache

* Add EasyCache support for Qwen Image

* Fix missing comma, curse you Cursor

* Add EasyCache support to AceStep

* Add EasyCache support to Chroma

* Added EasyCache support to Cosmos Predict t2i

* Make EasyCache not crash with Cosmos Predict ImageToVideo latents, though it does not work well at all

* Add EasyCache support to hidream

* Added EasyCache support to hunyuan video

* Added EasyCache support to hunyuan3d

* Added EasyCache support to LTXV (not very good, but does not crash)

* Implemented EasyCache for aura_flow

* Renamed SuperEasyCache to LazyCache, hardcoded subsample_factor to 8 on nodes

* Extra logging when verbose is true for EasyCache
2025-08-22 22:41:08 -04:00
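
For intuition, here is a minimal, illustrative sketch of the caching idea described in the commit above: cache the previous output, compare a cheap subsampled view of the current input against the one used for the last real computation, and reuse the cached output while the relative change stays under a reuse threshold. This is not the ComfyUI implementation; the class name and the heuristic are invented for illustration.

```python
# Toy sketch only: NOT the ComfyUI EasyCache code, just the skip-and-reuse idea.
import torch

class ToyEasyCache:
    def __init__(self, reuse_threshold: float = 0.2, subsample_factor: int = 8):
        self.reuse_threshold = reuse_threshold    # max relative change before recomputing
        self.subsample_factor = subsample_factor  # keep the heuristic cheap
        self.prev_input = None                    # subsampled input from the last real run
        self.prev_output = None                   # full output from the last real run

    def _subsample(self, x: torch.Tensor) -> torch.Tensor:
        # Keep every Nth element along the last dimension.
        return x[..., ::self.subsample_factor]

    def __call__(self, model_forward, x: torch.Tensor) -> torch.Tensor:
        x_small = self._subsample(x)
        if self.prev_output is not None:
            # Relative change between current and previous subsampled inputs.
            change = (x_small - self.prev_input).abs().mean() / (self.prev_input.abs().mean() + 1e-8)
            if change.item() < self.reuse_threshold:
                return self.prev_output           # cheap path: reuse cached output
        out = model_forward(x)                    # expensive path: run the real model
        self.prev_input, self.prev_output = x_small, out
        return out

# Usage with a stand-in "model":
cache = ToyEasyCache(reuse_threshold=0.2)
model = lambda t: t * 2.0
a = torch.randn(1, 16)
print(torch.equal(cache(model, a), a * 2.0))         # first call always computes
print(torch.equal(cache(model, a + 1e-4), a * 2.0))  # tiny change -> cached output reused
```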
doctorpangloss
9fcc597e1f Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-08-22 18:29:55 -07:00
doctorpangloss
735a133ad4 Update to 0.3.51 2025-08-22 17:29:18 -07:00
contentis
fe31ad0276
Add elementwise fusions (#9495)
* Add elementwise fusions

* Add addcmul pattern to Qwen
2025-08-22 19:39:15 -04:00
doctorpangloss
dfc47e0611 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-08-22 13:24:52 -07:00
comfyanonymous
ff57793659
Support InstantX Qwen controlnet. (#9488) 2025-08-22 00:53:11 -04:00
comfyanonymous
f7bd5e58dd
Make it easier to implement future qwen controlnets. (#9485) 2025-08-21 23:18:04 -04:00
comfyanonymous
0963493a9c
Support for Qwen Diffsynth Controlnets canny and depth. (#9465)
These are not real controlnets but rather a patch on the model, so they will be treated as such.

Put them in the models/model_patches/ folder.

Use the new ModelPatchLoader and QwenImageDiffsynthControlnet nodes.
2025-08-20 22:26:37 -04:00
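
For intuition, a minimal sketch of what "a patch on the model" means in contrast to a standalone controlnet: the patch wraps existing blocks of the diffusion model and adds a control-derived residual in place rather than running as a separate network. The class and module names below are hypothetical, not the actual ModelPatchLoader or QwenImageDiffsynthControlnet code.

```python
# Conceptual sketch only; names are illustrative, not ComfyUI's.
import torch
import torch.nn as nn

class BlockPatch(nn.Module):
    def __init__(self, block: nn.Module, control_dim: int, hidden_dim: int):
        super().__init__()
        self.block = block                                       # original model block, unchanged
        self.control_proj = nn.Linear(control_dim, hidden_dim)   # small added module

    def forward(self, x, control):
        # Original block output plus a residual computed from the control signal.
        return self.block(x) + self.control_proj(control)

block = nn.Linear(16, 16)
patched = BlockPatch(block, control_dim=4, hidden_dim=16)
x, control = torch.randn(2, 16), torch.randn(2, 4)
print(patched(x, control).shape)  # torch.Size([2, 16])
```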
comfyanonymous
8d38ea3bbf
Fix bf16 precision issue with qwen image embeddings. (#9441) 2025-08-20 02:58:54 -04:00
comfyanonymous
7cd2c4bd6a
Qwen rotary embeddings should now match reference code. (#9437) 2025-08-20 00:45:27 -04:00
comfyanonymous
ed43784b0d
WIP Qwen edit model: The diffusion model part. (#9383) 2025-08-17 16:45:39 -04:00
comfyanonymous
0f2b8525bc
Qwen image model refactor. (#9375) 2025-08-16 17:51:28 -04:00
comfyanonymous
1702e6df16
Implement wan2.2 camera model. (#9357)
Use the old WanCameraImageToVideo node.
2025-08-15 17:29:58 -04:00
comfyanonymous
c308a8840a
Add FluxKontextMultiReferenceLatentMethod node. (#9356)
This node is only useful if someone trains the kontext model to properly
use multiple reference images via the index method.

The default is the offset method, which feeds the multiple images as if they were stitched together into one. This method works with the current flux kontext model.
2025-08-15 15:50:39 -04:00
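
A conceptual sketch of the two methods mentioned above, under the assumption that references are distinguished through their position ids: the offset method shifts each reference's spatial coordinates as if the images were stitched side by side, while the index method leaves coordinates alone and tags each reference with its own index. This is an illustration, not the Flux Kontext implementation.

```python
# Illustrative only: one way to picture "offset" vs "index" reference handling.
import torch

def offset_method(ref_shapes):
    # ref_shapes: list of (h, w) latent grids; returns (y, x, index) ids per token.
    ids, x_offset = [], 0
    for h, w in ref_shapes:
        y, x = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        ids.append(torch.stack([y, x + x_offset, torch.zeros_like(y)], dim=-1).reshape(-1, 3))
        x_offset += w  # the next reference sits "to the right" of this one
    return torch.cat(ids)

def index_method(ref_shapes):
    ids = []
    for i, (h, w) in enumerate(ref_shapes):
        y, x = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        ids.append(torch.stack([y, x, torch.full_like(y, i + 1)], dim=-1).reshape(-1, 3))
    return torch.cat(ids)

print(offset_method([(2, 2), (2, 2)])[:, 1].max())    # x coordinates keep growing
print(index_method([(2, 2), (2, 2)])[:, 2].unique())  # distinct index per reference
```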
comfyanonymous
ad19a069f6
Make SLG nodes work on Qwen Image model. (#9345) 2025-08-14 23:16:01 -04:00
comfyanonymous
9df8792d4b
Make last PR not crash comfy on old pytorch. (#9324) 2025-08-13 15:12:41 -04:00
contentis
3da5a07510
SDPA backend priority (#9299) 2025-08-13 14:53:27 -04:00
comfyanonymous
560d38f34c
Wan2.2 fun control support. (#9292) 2025-08-12 23:26:33 -04:00
doctorpangloss
cd17b42664 Qwen Image with sage attention workarounds 2025-08-07 17:29:23 -07:00
doctorpangloss
b72e5ff448 Qwen Image with 4 bit and 8 bit quants 2025-08-07 13:40:05 -07:00
doctorpangloss
d8dbff9226 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-08-07 13:23:38 -07:00
comfyanonymous
d044a24398
Fix default shift and any latent size for qwen image model. (#9186) 2025-08-05 06:12:27 -04:00
comfyanonymous
c012400240
Initial support for qwen image model. (#9179) 2025-08-04 22:53:25 -04:00
doctorpangloss
87bed08124 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-08-01 16:05:47 -07:00
comfyanonymous
1e638a140b
Tiny wan vae optimizations. (#9136) 2025-08-01 05:25:38 -04:00
chaObserv
61b08d4ba6
Replace manual x * sigmoid(x) with torch silu in VAE nonlinearity (#9057) 2025-07-30 19:25:56 -04:00
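A quick check of why this swap is safe: torch's built-in SiLU is defined as x * sigmoid(x), so the replacement changes which kernel runs, not the result.

```python
# SiLU(x) == x * sigmoid(x); the built-in just uses the fused/optimized path.
import torch
import torch.nn.functional as F

x = torch.randn(4, 8)
assert torch.allclose(F.silu(x), x * torch.sigmoid(x))
```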
comfyanonymous
da9dab7edd
Small wan camera memory optimization. (#9111) 2025-07-30 05:55:26 -04:00
comfyanonymous
dca6bdd4fa
Make wan2.2 5B i2v take a lot less memory. (#9102) 2025-07-29 19:44:18 -04:00
doctorpangloss
fe4057385d Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-07-28 14:38:18 -07:00
doctorpangloss
03e5430121 Improvements for Wan 2.2 support
- add xet support and add the xet cache to manageable directories
- xet is enabled by default
- fix logging to root in various places
- improve logging about model unloading and loading
- TorchCompileNode now supports the VAE
- a missing torchaudio now causes less noise in the logs
- feature flags are assumed to support everything in the distributed progress context
- fix progress notifications
2025-07-28 14:36:27 -07:00
comfyanonymous
c60dc4177c
Remove unnecessary clones in the wan2.2 VAE. (#9083) 2025-07-28 14:48:19 -04:00
doctorpangloss
b5a50301f6 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-07-28 10:06:47 -07:00
comfyanonymous
a88788dce6
Wan 2.2 support. (#9080) 2025-07-28 08:00:23 -04:00
comfyanonymous
0621d73a9c
Remove useless code. (#9059) 2025-07-26 04:44:19 -04:00
doctorpangloss
3684cff31b Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-07-25 12:48:05 -07:00
comfyanonymous
e6e5d33b35
Remove useless code. (#9041)
This is only needed on old pytorch 2.0 and older.
2025-07-25 04:58:28 -04:00
doctorpangloss
94310e51e3 Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-07-16 14:24:12 -07:00
Harel Cain
9bc2798f72
LTXV VAE decoder: switch default padding mode (#8930) 2025-07-16 13:54:38 -04:00
doctorpangloss
bf3345e083 Add support for ComfyUI Manager 2025-07-15 14:06:29 -07:00
doctorpangloss
96b4e04315 packaging fixes
- enable the user db
- fix main_pre order everywhere
- convert absolute imports to relative imports everywhere
- better support for async
2025-07-15 10:19:33 -07:00
doctorpangloss
c086c5e005 fix pylint issues 2025-07-14 17:44:43 -07:00
doctorpangloss
04e411c32e Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-07-14 13:45:09 -07:00
comfyanonymous
938d3e8216
Remove windows line endings. (#8866) 2025-07-11 02:37:51 -04:00
doctorpangloss
b268296504 update upstream for flux fixes 2025-07-09 06:28:17 -07:00
josephrocca
974254218a
Un-hardcode chroma patch_size (#8840) 2025-07-08 15:56:59 -04:00
comfyanonymous
9093301a49
Don't add tiny bit of random noise when VAE encoding. (#8705)
Shouldn't change outputs but might make things a tiny bit more
deterministic.
2025-06-27 14:14:56 -04:00
doctorpangloss
a7aff3565b Merge branch 'master' of github.com:comfyanonymous/ComfyUI 2025-06-26 16:57:25 -07:00
comfyanonymous
ef5266b1c1
Support Flux Kontext Dev model. (#8679) 2025-06-26 11:28:41 -04:00
comfyanonymous
ec70ed6aea
Omnigen2 model implementation. (#8669) 2025-06-25 19:35:57 -04:00