Commit Graph

2391 Commits

Author SHA1 Message Date
patientx
fc93a6f534
Merge branch 'comfyanonymous:master' into master 2025-08-28 02:22:15 +03:00
Gangin Park
3aad339b63
Add DPM++ 2M SDE Heun (RES) sampler (#9542) 2025-08-27 19:07:31 -04:00
comfyanonymous
491755325c
Better s2v memory estimation. (#9584) 2025-08-27 19:02:42 -04:00
Christopher Anderson
cf22cbd8d5 Added env var for cudnn.benchmark 2025-08-28 09:00:08 +10:00
comfyanonymous
496888fd68
Improve s2v performance when generating videos longer than 120 frames. (#9582) 2025-08-27 16:06:40 -04:00
comfyanonymous
b5ac6ed7ce
Fixes to make controlnet type models work on qwen edit and kontext. (#9581) 2025-08-27 15:26:28 -04:00
Kohaku-Blueleaf
b20ba1f27c
Fix #9537 (#9576) 2025-08-27 12:45:02 -04:00
patientx
eeab23fc0b
Merge branch 'comfyanonymous:master' into master 2025-08-27 10:07:57 +03:00
comfyanonymous
88aee596a3
WIP Wan 2.2 S2V model. (#9568) 2025-08-27 01:10:34 -04:00
patientx
c1aef0126d
Merge pull request #276 from sfinktah/sfink-cudnn-benchmark-env
Deleted torch.backends.cudnn.benchmark line, defaults are fine
2025-08-26 19:34:35 +03:00
patientx
1efeba7066
Merge branch 'comfyanonymous:master' into master 2025-08-26 10:41:38 +03:00
comfyanonymous
914c2a2973
Implement wav2vec2 as an audio encoder model. (#9549)
This is useless on its own but there are multiple models that use it.
2025-08-25 23:26:47 -04:00
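For context, a minimal sketch of what an audio encoder like wav2vec2 produces, using the stock Hugging Face model rather than ComfyUI's implementation (the checkpoint name is an assumption):

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Illustration only: stock Hugging Face wav2vec2, not the repo's code.
fe = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h").eval()

waveform = torch.randn(16000)  # one second of dummy 16 kHz audio
inputs = fe(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    features = model(**inputs).last_hidden_state  # (1, frames, 768) conditioning features
```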
Christopher Anderson
110cb0a9d9 Deleted torch.backends.cudnn.benchmark line, defaults are fine 2025-08-26 08:43:31 +10:00
Christopher Anderson
1b9a3b12c2 had to move cudnn disablement up much higher 2025-08-25 14:11:54 +10:00
Christopher Anderson
cd3d60254b argggh, white space hell 2025-08-25 09:52:58 +10:00
Christopher Anderson
184fa5921f worst PR ever, really. 2025-08-25 09:42:27 +10:00
Christopher Anderson
33c43b68c3 worst PR ever 2025-08-25 09:38:22 +10:00
Christopher Anderson
2a06dc8e87 Merge remote-tracking branch 'origin/sfink-cudnn-env' into sfink-cudnn-env
# Conflicts:
#	comfy/customzluda/zluda.py
2025-08-25 09:34:32 +10:00
Christopher Anderson
3504eeeb4a rebased onto upstream master (woops) 2025-08-25 09:32:34 +10:00
Christopher Anderson
7eda4587be Added env var TORCH_BACKENDS_CUDNN_ENABLED, defaults to 1. 2025-08-25 09:31:12 +10:00
Christopher Anderson
954644ef83 Added env var TORCH_BACKENDS_CUDNN_ENABLED, defaults to 1. 2025-08-25 08:56:48 +10:00
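A minimal sketch of the toggle this commit describes; the env var name comes from the message, the wiring is assumed:

```python
import os
import torch

# TORCH_BACKENDS_CUDNN_ENABLED defaults to 1 (cuDNN on); set to 0 to disable.
torch.backends.cudnn.enabled = (
    os.environ.get("TORCH_BACKENDS_CUDNN_ENABLED", "1") == "1"
)
```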
Rando717
053a6b95e5
Update zluda.py (MEM_BUS_WIDTH)
Added more cards, mostly RDNA(1) and Radeon Pro.

Reasoning: Every time zluda.py gets updated I have to manually add 256 for my RX 5700, otherwise it defaults to 128. Also, manual local edits fail at git pull.
2025-08-24 18:39:40 +02:00
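A hypothetical sketch of the kind of per-card lookup this patch extends; only the RX 5700's 256-bit bus and the 128-bit fallback come from the message above, the rest is illustrative:

```python
# Not the repo's actual table; names beyond the RX 5700 entry are assumptions.
MEM_BUS_WIDTH = {
    "AMD Radeon RX 5700": 256,     # RDNA1, 256-bit GDDR6
    "AMD Radeon RX 5700 XT": 256,
}

def bus_width(device_name: str) -> int:
    return MEM_BUS_WIDTH.get(device_name, 128)  # unknown cards fall back to 128
```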
patientx
c92a07594b
Update zluda.py 2025-08-24 12:01:20 +03:00
patientx
dba9d20791
Update zluda.py 2025-08-24 10:23:30 +03:00
patientx
cdc04b5a8a
Merge branch 'comfyanonymous:master' into master 2025-08-23 07:47:07 +03:00
comfyanonymous
41048c69b4
Fix Conditioning masks on 3d latents. (#9506) 2025-08-22 23:15:44 -04:00
Jedrzej Kosinski
fc247150fe
Implement EasyCache and Invent LazyCache (#9496)
* Attempting a universal implementation of EasyCache, starting with flux as a test; I screwed up the math a bit, but when I set it just right it works.

* Fixed math to make threshold work as expected, refactored code to use EasyCacheHolder instead of a dict wrapped by object

* Use sigmas from transformer_options instead of timesteps to be compatible with a greater amount of models, make end_percent work

* Make the log statement useful when not skipping, preparing for per-cond caching

* Added DIFFUSION_MODEL wrapper around forward function for wan model

* Add subsampling for heuristic inputs

* Add subsampling to output_prev (output_prev_subsampled now)

* Properly consider conds in EasyCache logic

* Created SuperEasyCache to test what happens if caching and reuse are moved outside the scope of conds, added PREDICT_NOISE wrapper to facilitate this test

* Change max reuse_threshold to 3.0

* Mark EasyCache/SuperEasyCache as experimental (beta)

* Make Lumina2 compatible with EasyCache

* Add EasyCache support for Qwen Image

* Fix missing comma, curse you Cursor

* Add EasyCache support to AceStep

* Add EasyCache support to Chroma

* Added EasyCache support to Cosmos Predict t2i

* Make EasyCache not crash with Cosmos Predict ImageToVideo latents, though it does not work well at all

* Add EasyCache support to hidream

* Added EasyCache support to hunyuan video

* Added EasyCache support to hunyuan3d

* Added EasyCache support to LTXV (not very good, but does not crash)

* Implemented EasyCache for aura_flow

* Renamed SuperEasyCache to LazyCache, hardcoded subsample_factor to 8 on nodes

* Extra logging when verbose is true for EasyCache
2025-08-22 22:41:08 -04:00
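A conceptual sketch of the EasyCache idea described in the bullets above, with hypothetical names; the subsample factor of 8 and the 3.0 threshold cap come from the commit messages:

```python
import torch

class EasyCacheSketch:
    def __init__(self, reuse_threshold: float = 0.2, subsample_factor: int = 8):
        self.threshold = reuse_threshold   # the node caps this at 3.0
        self.factor = subsample_factor     # hardcoded to 8 on the nodes
        self.prev_input = None
        self.prev_output = None

    def maybe_skip(self, x: torch.Tensor, forward):
        x_sub = x[..., ::self.factor, ::self.factor]   # cheap heuristic input
        if self.prev_input is not None:
            change = ((x_sub - self.prev_input).abs().mean()
                      / (self.prev_input.abs().mean() + 1e-8))
            if change < self.threshold:
                return self.prev_output                # reuse the cached output
        out = forward(x)                               # otherwise run the model
        self.prev_input, self.prev_output = x_sub, out
        return out
```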
contentis
fe31ad0276
Add elementwise fusions (#9495)
* Add elementwise fusions

* Add addcmul pattern to Qwen
2025-08-22 19:39:15 -04:00
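The addcmul pattern mentioned above fuses an add and a multiply into one kernel; a tiny self-contained illustration:

```python
import torch

x, w, y = torch.randn(3, 4), torch.randn(3, 4), torch.randn(3, 4)
out_unfused = x + w * y             # two pointwise kernels
out_fused = torch.addcmul(x, w, y)  # one fused multiply-add
assert torch.allclose(out_unfused, out_fused)
```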
patientx
7bc46389fa
Merge branch 'comfyanonymous:master' into master 2025-08-22 10:52:52 +03:00
comfyanonymous
ff57793659
Support InstantX Qwen controlnet. (#9488) 2025-08-22 00:53:11 -04:00
comfyanonymous
f7bd5e58dd
Make it easier to implement future qwen controlnets. (#9485) 2025-08-21 23:18:04 -04:00
patientx
7ff01ded58
Merge branch 'comfyanonymous:master' into master 2025-08-21 09:24:26 +03:00
comfyanonymous
0963493a9c
Support for Qwen Diffsynth Controlnets canny and depth. (#9465)
These are not real controlnets but actually a patch on the model so they
will be treated as such.

Put them in the models/model_patches/ folder.

Use the new ModelPatchLoader and QwenImageDiffsynthControlnet nodes.
2025-08-20 22:26:37 -04:00
patientx
6dca25e2a8
Merge branch 'comfyanonymous:master' into master 2025-08-20 10:14:34 +03:00
comfyanonymous
8d38ea3bbf
Fix bf16 precision issue with qwen image embeddings. (#9441) 2025-08-20 02:58:54 -04:00
comfyanonymous
5a8f502db5
Disable prompt weights for qwen. (#9438) 2025-08-20 01:08:11 -04:00
comfyanonymous
7cd2c4bd6a
Qwen rotary embeddings should now match reference code. (#9437) 2025-08-20 00:45:27 -04:00
comfyanonymous
dfa791eb4b
Rope fix for qwen vl. (#9435) 2025-08-19 20:47:42 -04:00
patientx
1cbb5fdc14
Merge branch 'comfyanonymous:master' into master 2025-08-19 10:21:12 +03:00
comfyanonymous
4977f203fa
P2 of qwen edit model. (#9412)
* P2 of qwen edit model.

* Typo.

* Fix normal qwen.

* Fix.

* Make the TextEncodeQwenImageEdit also set the ref latent.

If you don't want it to set the ref latent and want to use the
ReferenceLatent node with your custom latent instead just disconnect the
VAE.
2025-08-18 22:38:34 -04:00
patientx
3f09b4dba5
Merge branch 'comfyanonymous:master' into master 2025-08-18 15:14:34 +03:00
Jedrzej Kosinski
7f3b9b16c6
Make step index detection much more robust (#9392) 2025-08-17 18:54:07 -04:00
comfyanonymous
ed43784b0d
WIP Qwen edit model: The diffusion model part. (#9383) 2025-08-17 16:45:39 -04:00
patientx
64d6cf045e
Merge branch 'comfyanonymous:master' into master 2025-08-17 11:29:13 +03:00
comfyanonymous
0f2b8525bc
Qwen image model refactor. (#9375) 2025-08-16 17:51:28 -04:00
patientx
5a21015adb
Merge branch 'comfyanonymous:master' into master 2025-08-16 09:54:01 +03:00
comfyanonymous
1702e6df16
Implement wan2.2 camera model. (#9357)
Use the old WanCameraImageToVideo node.
2025-08-15 17:29:58 -04:00
patientx
eb283b5fd7
Merge branch 'comfyanonymous:master' into master 2025-08-16 00:26:31 +03:00
comfyanonymous
c308a8840a
Add FluxKontextMultiReferenceLatentMethod node. (#9356)
This node is only useful if someone trains the kontext model to properly
use multiple reference images via the index method.

The default is the offset method, which feeds the multiple images as if
they were stitched together as one. This method works with the current
flux kontext model.
2025-08-15 15:50:39 -04:00
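A toy illustration of the two layouts described above; shapes and names are assumptions, not the node's real code:

```python
import torch

refs = [torch.randn(1, 16, 32, 32) for _ in range(2)]        # two reference latents
tokens = [r.flatten(2).transpose(1, 2) for r in refs]        # (1, 1024, 16) each

# offset method (default): one long sequence with ever-increasing positions,
# as if the references were stitched together into a single image.
offset_seq = torch.cat(tokens, dim=1)                        # (1, 2048, 16)
offset_pos = torch.arange(offset_seq.shape[1])

# index method: positions restart per image; a separate index id tells the
# model which reference each token belongs to.
index_pos = torch.arange(tokens[0].shape[1]).repeat(2)
index_ids = torch.repeat_interleave(torch.arange(2), tokens[0].shape[1])
```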
patientx
13f5f9d78f
Merge branch 'comfyanonymous:master' into master 2025-08-15 10:54:10 +03:00
comfyanonymous
e08ecfbd8a
Add warning when using old pytorch. (#9347) 2025-08-15 00:22:26 -04:00
comfyanonymous
4e5c230f6a
Fix last commit not working on older pytorch. (#9346) 2025-08-14 23:44:02 -04:00
Xiangxi Guo (Ryan)
f0d5d0111f
Avoid torch compile graphbreak for older pytorch versions (#9344)
Turns out torch.compile has some gaps in context manager decorator
syntax support. I've sent patches to fix that in PyTorch, but it won't
be available for all the folks running older versions of PyTorch, hence
this trivial patch.
2025-08-14 23:41:37 -04:00
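A sketch of the syntax gap being worked around, assuming torch.autocast as the context manager:

```python
import torch

@torch.autocast(device_type="cpu")   # context-manager-as-decorator syntax:
def f(x):                            # this form could graph-break on older
    return x * 2                     # torch.compile versions

def g(x):
    with torch.autocast(device_type="cpu"):  # the explicit form compiles cleanly
        return x * 2

g_compiled = torch.compile(g)
```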
comfyanonymous
ad19a069f6
Make SLG nodes work on Qwen Image model. (#9345) 2025-08-14 23:16:01 -04:00
patientx
a927fbd99b
Merge branch 'comfyanonymous:master' into master 2025-08-14 12:16:50 +03:00
Jedrzej Kosinski
e4f7ea105f
Added context window support to core sampling code (#9238)
* Added initial support for basic context windows - in progress

* Add prepare_sampling wrapper for context window to more accurately estimate latent memory requirements, fixed merging wrappers/callbacks dicts in prepare_model_patcher

* Made context windows compatible with different dimensions; works for WAN, but results are bad

* Fix comfy.patcher_extension.merge_nested_dicts calls in prepare_model_patcher in sampler_helpers.py

* Considering adding some callbacks to context window code to allow extensions of behavior without the need to rewrite code

* Made dim slicing cleaner

* Add Wan Context Windows node for testing

* Made context schedule and fuse method functions be stored on the handler instead of needing to be registered in core code to be found

* Moved some code around between node_context_windows.py and context_windows.py

* Change manual context window nodes names/ids

* Added callbacks to IndexListContexHandler

* Adjusted default values for context_length and context_overlap, made schema.inputs definition for WAN Context Windows less annoying

* Make get_resized_cond more robust for various dim sizes

* Fix typo

* Another small fix
2025-08-13 21:33:05 -04:00
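A hedged sketch of the overlapping-window idea behind context_length and context_overlap (not the repo's actual handler code):

```python
def index_windows(length: int, context_length: int, context_overlap: int):
    """Yield overlapping [start, end) index windows over the latent dimension."""
    stride = context_length - context_overlap
    assert stride > 0, "overlap must be smaller than the window"
    start = 0
    while True:
        end = min(start + context_length, length)
        yield list(range(start, end))
        if end >= length:
            break
        start += stride

# e.g. 81 latent frames, windows of 16 overlapping by 4: each window is
# sampled separately, then the overlapping regions are fused.
print([w[0] for w in index_windows(81, 16, 4)])  # window start indices
```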
Simon Lui
c991a5da65
Fix XPU iGPU regressions (#9322)
* Change the bf16 check, switch non-blocking off by default with an option to force it on to regain speed on certain classes of iGPUs, and refactor the xpu check.

* Turn non_blocking off by default for xpu.

* Update README.md for Intel GPUs.
2025-08-13 19:13:35 -04:00
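A small sketch of the described default, with an assumed force flag standing in for the commit's "option to force":

```python
import torch

def to_device(t: torch.Tensor, device: torch.device,
              force_non_blocking: bool = False) -> torch.Tensor:
    # Non-blocking copies stay off on xpu by default (slow on some iGPUs).
    nb = force_non_blocking or device.type != "xpu"
    return t.to(device, non_blocking=nb)
```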
patientx
804c7097fa
Merge branch 'comfyanonymous:master' into master 2025-08-13 23:56:43 +03:00
comfyanonymous
9df8792d4b
Make last PR not crash comfy on old pytorch. (#9324) 2025-08-13 15:12:41 -04:00
contentis
3da5a07510
SDPA backend priority (#9299) 2025-08-13 14:53:27 -04:00
patientx
bcafc3f7a3
Merge branch 'comfyanonymous:master' into master 2025-08-13 10:36:20 +03:00
comfyanonymous
560d38f34c
Wan2.2 fun control support. (#9292) 2025-08-12 23:26:33 -04:00
patientx
f80a9bb674
Merge branch 'comfyanonymous:master' into master 2025-08-12 00:33:53 +03:00
PsychoLogicAu
2208aa616d
Support SimpleTuner lycoris lora for Qwen-Image (#9280) 2025-08-11 16:56:16 -04:00
patientx
c2686a3968
Merge branch 'comfyanonymous:master' into master 2025-08-10 12:09:19 +03:00
comfyanonymous
5828607ccf
Not sure if AMD actually supports fp16 acc but it doesn't crash. (#9258) 2025-08-09 12:49:25 -04:00
patientx
89499c6fae
Merge branch 'comfyanonymous:master' into master 2025-08-08 11:40:07 +03:00
comfyanonymous
735bb4bdb1
Users report gfx1201 is buggy on flux with pytorch attention. (#9244) 2025-08-08 04:21:00 -04:00
patientx
8795ae98aa
Merge branch 'comfyanonymous:master' into master 2025-08-06 20:24:47 +03:00
flybirdxx
4c3e57b0ae
Fixed an issue where qwenLora could not be loaded properly. (#9208) 2025-08-06 13:23:11 -04:00
patientx
2e39e0999f
Update zluda.py 2025-08-05 19:21:20 +03:00
patientx
28957a7bd6
Merge branch 'comfyanonymous:master' into master 2025-08-05 13:37:09 +03:00
comfyanonymous
d044a24398
Fix default shift and any latent size for qwen image model. (#9186) 2025-08-05 06:12:27 -04:00
patientx
e419bade03
Merge pull request #244 from sfinktah/sfink-zluda-is-nasty
Bad ideas from zluda update.
2025-08-05 09:48:53 +03:00
patientx
ea8122f065
Merge branch 'comfyanonymous:master' into master 2025-08-05 09:47:31 +03:00
comfyanonymous
c012400240
Initial support for qwen image model. (#9179) 2025-08-04 22:53:25 -04:00
Christopher Anderson
4f853403fe Bad ideas from zluda update. 2025-08-05 06:00:55 +10:00
patientx
88b7fe87ff
Merge branch 'comfyanonymous:master' into master 2025-08-04 12:38:56 +03:00
comfyanonymous
03895dea7c
Fix another issue with the PR. (#9170) 2025-08-04 04:33:04 -04:00
comfyanonymous
84f9759424
Add some warnings and prevent crash when cond devices don't match. (#9169) 2025-08-04 04:20:12 -04:00
comfyanonymous
7991341e89
Various fixes for broken things from earlier PR. (#9168) 2025-08-04 04:02:40 -04:00
patientx
37415c40c1
device identification and setting triton arch override 2025-08-04 10:44:18 +03:00
patientx
d823c0c615
Merge branch 'comfyanonymous:master' into master 2025-08-04 10:42:15 +03:00
comfyanonymous
140ffc7fdc
Fix broken controlnet from last PR. (#9167) 2025-08-04 03:28:12 -04:00
comfyanonymous
182f90b5ec
Lower cond vram use by casting at the same time as device transfer. (#9159) 2025-08-04 03:11:53 -04:00
patientx
7258461c23
Merge branch 'comfyanonymous:master' into master 2025-08-03 16:33:54 +03:00
comfyanonymous
aebac22193
Cleanup. (#9160) 2025-08-03 07:08:11 -04:00
patientx
da4fc8189a
Merge branch 'comfyanonymous:master' into master 2025-08-03 00:17:56 +03:00
comfyanonymous
13aaa66ec2
Make sure context is on the right device. (#9154) 2025-08-02 15:09:23 -04:00
comfyanonymous
5f582a9757
Make sure all the conds are on the right device. (#9151) 2025-08-02 15:00:13 -04:00
patientx
83dbd68651
Merge branch 'comfyanonymous:master' into master 2025-08-01 14:42:25 +03:00
comfyanonymous
1e638a140b
Tiny wan vae optimizations. (#9136) 2025-08-01 05:25:38 -04:00
patientx
321d683af0
Merge branch 'comfyanonymous:master' into master 2025-07-31 14:49:33 +03:00
chaObserv
61b08d4ba6
Replace manual x * sigmoid(x) with torch silu in VAE nonlinearity (#9057) 2025-07-30 19:25:56 -04:00
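The replacement is a pure equivalence, since silu(x) = x * sigmoid(x):

```python
import torch
import torch.nn.functional as F

x = torch.randn(4)
assert torch.allclose(x * torch.sigmoid(x), F.silu(x))  # same math, one fused op
```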
comfyanonymous
da9dab7edd
Small wan camera memory optimization. (#9111) 2025-07-30 05:55:26 -04:00
patientx
1bd4b6489e
Merge branch 'comfyanonymous:master' into master 2025-07-30 11:11:46 +03:00
comfyanonymous
dca6bdd4fa
Make wan2.2 5B i2v take a lot less memory. (#9102) 2025-07-29 19:44:18 -04:00
patientx
d8ca8134c3
Merge branch 'comfyanonymous:master' into master 2025-07-29 11:56:59 +03:00
comfyanonymous
7d593baf91
Extra reserved vram on large cards on windows. (#9093) 2025-07-29 04:07:45 -04:00
patientx
fc4e82537c
Merge pull request #233 from sfinktah/sfink-flash-attn-gfx-startswith
This will allow much better support for gfx1032 and other things not …
2025-07-28 23:12:38 +03:00
patientx
7ba2a8d3b0
Merge branch 'comfyanonymous:master' into master 2025-07-28 22:15:10 +03:00
comfyanonymous
c60dc4177c
Remove unnecessary clones in the wan2.2 VAE. (#9083) 2025-07-28 14:48:19 -04:00
Christopher Anderson
b5ede18481 This will allow much better support for gfx1032 and other things not specifically named 2025-07-29 04:21:45 +10:00
patientx
769ab3bd25
Merge branch 'comfyanonymous:master' into master 2025-07-28 15:21:30 +03:00
comfyanonymous
a88788dce6
Wan 2.2 support. (#9080) 2025-07-28 08:00:23 -04:00
patientx
5a45e12b61
Merge branch 'comfyanonymous:master' into master 2025-07-26 14:09:19 +03:00
comfyanonymous
0621d73a9c
Remove useless code. (#9059) 2025-07-26 04:44:19 -04:00
comfyanonymous
e6e5d33b35
Remove useless code. (#9041)
This is only needed on old pytorch 2.0 and older.
2025-07-25 04:58:28 -04:00
patientx
c3bf1d95e2
Merge branch 'comfyanonymous:master' into master 2025-07-25 10:20:29 +03:00
Eugene Fairley
4293e4da21
Add WAN ATI support (#8874)
* Add WAN ATI support

* Fixes

* Fix length

* Remove extra functions

* Fix

* Fix

* Ruff fix

* Remove torch.no_grad

* Add batch trajectory logic

* Scale inputs before and after motion patch

* Batch image/trajectory

* Ruff fix

* Clean up
2025-07-24 20:59:19 -04:00
patientx
970b7fb84f
Merge branch 'comfyanonymous:master' into master 2025-07-24 22:30:55 +03:00
comfyanonymous
69cb57b342
Print xpu device name. (#9035) 2025-07-24 15:06:25 -04:00
honglyua
0ccc88b03f
Support Iluvatar CoreX (#8585)
* Support Iluvatar CoreX
Co-authored-by: mingjiang.li <mingjiang.li@iluvatar.com>
2025-07-24 13:57:36 -04:00
patientx
30539d0d13
Merge branch 'comfyanonymous:master' into master 2025-07-24 13:59:09 +03:00
Kohaku-Blueleaf
eb2f78b4e0
[Training Node] algo support, grad acc, optional grad ckpt (#9015)
* Add factorization utils for lokr

* Add lokr train impl

* Add loha train impl

* Add adapter map for algo selection

* Add optional grad ckpt and algo selection

* Update __init__.py

* correct key name for loha

* Use custom fwd/bwd func and better init for loha

* Support gradient accumulation

* Fix bugs of loha

* use more stable init

* Add OFT training

* linting
2025-07-23 20:57:27 -04:00
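A minimal gradient-accumulation sketch of the kind this node adds; model, loader, and accum_steps are placeholders, not the node's real interface:

```python
import torch

def train_epoch(model, loader, optimizer, accum_steps: int = 4):
    optimizer.zero_grad()
    for i, (x, target) in enumerate(loader):
        loss = torch.nn.functional.mse_loss(model(x), target) / accum_steps
        loss.backward()              # gradients sum across micro-batches
        if (i + 1) % accum_steps == 0:
            optimizer.step()         # one update per accum_steps batches
            optimizer.zero_grad()
```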
chaObserv
e729a5cc11
Separate denoised and noise estimation in Euler CFG++ (#9008)
This will change their behavior with the sampling CONST type.
It also combines euler_cfg_pp and euler_ancestral_cfg_pp into one main function.
2025-07-23 19:47:05 -04:00
comfyanonymous
d3504e1778
Enable pytorch attention by default for gfx1201 on torch 2.8 (#9029) 2025-07-23 19:21:29 -04:00
comfyanonymous
a86a58c308
Fix xpu function not implemented p2. (#9027) 2025-07-23 18:18:20 -04:00
comfyanonymous
39dda1d40d
Fix xpu function not implemented. (#9026) 2025-07-23 18:10:59 -04:00
patientx
58f3250106
Merge branch 'comfyanonymous:master' into master 2025-07-23 23:49:46 +03:00
comfyanonymous
5ad33787de
Add default device argument. (#9023) 2025-07-23 14:20:49 -04:00
patientx
bd33a5d382
Merge branch 'comfyanonymous:master' into master 2025-07-23 03:27:52 +03:00
Simon Lui
255f139863
Add xpu version for async offload and some other things. (#9004) 2025-07-22 15:20:09 -04:00
patientx
b049c1df82
Merge branch 'comfyanonymous:master' into master 2025-07-17 00:49:42 +03:00
comfyanonymous
491fafbd64
Silence clip tokenizer warning. (#8934) 2025-07-16 14:42:07 -04:00
Harel Cain
9bc2798f72
LTXV VAE decoder: switch default padding mode (#8930) 2025-07-16 13:54:38 -04:00
patientx
d22d65cc68
Merge branch 'comfyanonymous:master' into master 2025-07-16 13:58:56 +03:00
comfyanonymous
50afba747c
Add attempt to work around the safetensors mmap issue. (#8928) 2025-07-16 03:42:17 -04:00
patientx
79e3b67425
Merge branch 'comfyanonymous:master' into master 2025-07-15 12:24:08 +03:00
Yoland Yan
543c24108c
Fix wrong reference bug (#8910) 2025-07-14 20:45:55 -04:00
patientx
3845c2ff7a
Merge branch 'comfyanonymous:master' into master 2025-07-12 14:59:05 +03:00
comfyanonymous
b40143984c
Add model detection error hint for lora. (#8880) 2025-07-12 03:49:26 -04:00
patientx
5ede75293f
Merge branch 'comfyanonymous:master' into master 2025-07-11 17:30:21 +03:00
comfyanonymous
938d3e8216
Remove windows line endings. (#8866) 2025-07-11 02:37:51 -04:00
patientx
43514805ed
Merge branch 'comfyanonymous:master' into master 2025-07-10 21:46:22 +03:00
guill
2b653e8c18
Support for async node functions (#8830)
* Support for async execution functions

This commit adds support for node execution functions defined as async. When
a node's execution function is defined as async, we can continue
executing other nodes while it is processing.

Standard uses of `await` should "just work", but people will still have
to be careful if they spawn actual threads. Because torch doesn't really
have async/await versions of functions, this won't particularly help
with most locally-executing nodes, but it does work for e.g. web
requests to other machines.

In addition to the execute function, the `VALIDATE_INPUTS` and
`check_lazy_status` functions can also be defined as async, though we'll
only resolve one node at a time right now for those.

* Add the execution model tests to CI

* Add a missing file

It looks like this got caught by .gitignore? There's probably a better
place to put it, but I'm not sure what that is.

* Add the websocket library for automated tests

* Add additional tests for async error cases

Also fixes one bug that was found when an async function throws an error
after being scheduled on a task.

* Add a feature flags message to reduce bandwidth

We now only send 1 preview message of the latest type the client can
support.

We'll add a console warning when the client fails to send a feature
flags message at some point in the future.

* Add async tests to CI

* Don't actually add new tests in this PR

Will do it in a separate PR

* Resolve unit test in GPU-less runner

* Just remove the tests that GHA can't handle

* Change line endings to UNIX-style

* Avoid loading model_management.py so early

Because model_management.py has a top-level `logging.info`, we have to
be careful not to import that file before we call `setup_logging`. If we
do, we end up having the default logging handler registered in addition
to our custom one.
2025-07-10 14:46:19 -04:00
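A hypothetical async node in the spirit of this PR; the class and method names are illustrative, not the actual node API:

```python
import aiohttp

class RemoteTextNode:
    # Because execute is async, the executor can keep running other nodes
    # while this web request is in flight.
    async def execute(self, url: str):
        async with aiohttp.ClientSession() as session:
            async with session.get(url) as resp:
                return (await resp.text(),)
```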
patientx
b621197729
Merge branch 'comfyanonymous:master' into master 2025-07-09 00:16:45 +03:00
chaObserv
aac10ad23a
Add SA-Solver sampler (#8834) 2025-07-08 16:17:06 -04:00
josephrocca
974254218a
Un-hardcode chroma patch_size (#8840) 2025-07-08 15:56:59 -04:00
patientx
ab04a1c165
Merge branch 'comfyanonymous:master' into master 2025-07-06 14:41:03 +03:00
comfyanonymous
75d327abd5
Remove some useless code. (#8812) 2025-07-06 07:07:39 -04:00
patientx
65db0a046f
Merge branch 'comfyanonymous:master' into master 2025-07-06 02:44:08 +03:00
comfyanonymous
ee615ac269
Add warning when loading file unsafely. (#8800) 2025-07-05 14:34:57 -04:00
patientx
94464d7867
Merge branch 'comfyanonymous:master' into master 2025-07-04 11:03:58 +03:00
chaObserv
f41f323c52
Add the denoising step to several samplers (#8780) 2025-07-03 19:20:53 -04:00
patientx
455fc30fd8
Merge branch 'comfyanonymous:master' into master 2025-07-03 08:08:09 +03:00
City
d9277301d2
Initial code for new SLG node (#8759) 2025-07-02 20:13:43 -04:00
patientx
ac99b100ef
Merge branch 'comfyanonymous:master' into master 2025-07-02 12:50:51 +03:00
comfyanonymous
111f583e00
Fallback to regular op when fp8 op throws exception. (#8761) 2025-07-02 00:57:13 -04:00
patientx
fa03718ba9
Merge branch 'comfyanonymous:master' into master 2025-07-01 14:19:30 +03:00
chaObserv
b22e97dcfa
Migrate ER-SDE from VE to VP algorithm and add its sampler node (#8744)
Apply alpha scaling in the algorithm for reverse-time SDE and add custom ER-SDE sampler node for other solver types (SDE, ODE).
2025-07-01 02:38:52 -04:00
patientx
b093eef244
Merge branch 'comfyanonymous:master' into master 2025-06-29 15:26:39 +03:00
comfyanonymous
170c7bb90c
Fix contiguous issue with pytorch nightly. (#8729) 2025-06-29 06:38:40 -04:00
patientx
82eaaa0d10
Merge branch 'comfyanonymous:master' into master 2025-06-29 12:45:29 +03:00
comfyanonymous
396454fa41
Reorder the schedulers so simple is the default one. (#8722) 2025-06-28 18:12:56 -04:00
patientx
95625f5bde
Merge branch 'comfyanonymous:master' into master 2025-06-28 22:39:52 +03:00
xufeng
ba9548f756
"--whitelist-custom-nodes" args for comfy core to go with "--disable-all-custom-nodes" for development purposes (#8592)
* feat: "--whitelist-custom-nodes" args for comfy core to go with "--disable-all-custom-nodes" for development purposes

* feat: Simplify custom nodes whitelist logic to use consistent code paths
2025-06-28 15:24:02 -04:00
patientx
14864643e6
Merge branch 'comfyanonymous:master' into master 2025-06-28 01:17:17 +03:00
comfyanonymous
c36be0ea09
Fix memory estimation bug with kontext. (#8709) 2025-06-27 17:21:12 -04:00
patientx
4ab5a4bb6b
Merge branch 'comfyanonymous:master' into master 2025-06-27 21:54:55 +03:00
comfyanonymous
9093301a49
Don't add tiny bit of random noise when VAE encoding. (#8705)
Shouldn't change outputs but might make things a tiny bit more
deterministic.
2025-06-27 14:14:56 -04:00
patientx
76486ca342
Merge branch 'comfyanonymous:master' into master 2025-06-26 18:53:44 +03:00
comfyanonymous
ef5266b1c1
Support Flux Kontext Dev model. (#8679) 2025-06-26 11:28:41 -04:00
patientx
c1ce148f41
Merge branch 'comfyanonymous:master' into master 2025-06-26 11:34:14 +03:00
comfyanonymous
a96e65df18
Disable omnigen2 fp16 on older pytorch versions. (#8672) 2025-06-26 03:39:09 -04:00
patientx
7db0737e86
Merge branch 'comfyanonymous:master' into master 2025-06-26 02:41:07 +03:00
comfyanonymous
ec70ed6aea
Omnigen2 model implementation. (#8669) 2025-06-25 19:35:57 -04:00
patientx
8ad7604b12
Add files via upload 2025-06-26 01:45:32 +03:00
patientx
65ff46f01e
Merge branch 'comfyanonymous:master' into master 2025-06-25 12:14:05 +03:00
comfyanonymous
7a13f74220
unet -> diffusion model (#8659) 2025-06-25 04:52:34 -04:00
patientx
71a47f608a
Merge branch 'comfyanonymous:master' into master 2025-06-24 23:49:15 +03:00
chaObserv
8042eb20c6
Singlestep DPM++ SDE for RF (#8627)
Refactor the algorithm, and apply alpha scaling.
2025-06-24 14:59:09 -04:00
patientx
2bf05370ad
Merge branch 'comfyanonymous:master' into master 2025-06-21 14:54:22 +03:00
comfyanonymous
1883e70b43
Fix exception when using a noise mask with cosmos predict2. (#8621)
* Fix exception when using a noise mask with cosmos predict2.

* Fix ruff.
2025-06-21 03:30:39 -04:00
patientx
0e967d11b1
Merge branch 'comfyanonymous:master' into master 2025-06-20 16:18:48 +03:00
comfyanonymous
f7fb193712
Small flux optimization. (#8611) 2025-06-20 05:37:32 -04:00
patientx
73dfb38f5b
Merge branch 'comfyanonymous:master' into master 2025-06-20 09:44:52 +03:00
comfyanonymous
7e9267fa77
Make flux controlnet work with sd3 text enc. (#8599) 2025-06-19 18:50:05 -04:00
patientx
c46cc59288
Merge branch 'comfyanonymous:master' into master 2025-06-19 19:02:07 +03:00
comfyanonymous
91d40086db
Fix pytorch warning. (#8593) 2025-06-19 11:04:52 -04:00
patientx
e34ff57933
Add files via upload 2025-06-17 00:09:40 +03:00
patientx
4339cbea46
Merge branch 'comfyanonymous:master' into master 2025-06-16 22:21:35 +03:00
chaObserv
8e81c507d2
Multistep DPM++ SDE samplers for RF (#8541)
Include alpha in sampling and minor refactoring
2025-06-16 14:47:10 -04:00
comfyanonymous
e1c6dc720e
Allow setting min_length with tokenizer_data. (#8547) 2025-06-16 13:43:52 -04:00
patientx
e8327ffc3d
Merge branch 'comfyanonymous:master' into master 2025-06-15 21:42:40 +03:00
comfyanonymous
7ea79ebb9d
Add correct eps to ltxv rmsnorm. (#8542) 2025-06-15 12:21:25 -04:00
patientx
031f2f5120
Merge branch 'comfyanonymous:master' into master 2025-06-15 16:23:38 +03:00
comfyanonymous
d6a2137fc3
Support Cosmos predict2 image to video models. (#8535)
Use the CosmosPredict2ImageToVideoLatent node.
2025-06-14 21:37:07 -04:00
patientx
132d593223
Merge branch 'comfyanonymous:master' into master 2025-06-15 03:01:50 +03:00
chaObserv
53e8d8193c
Generalize SEEDS samplers (#8529)
Restore VP algorithm for RF and refactor noise_coeffs and half-logSNR calculations
2025-06-14 16:58:16 -04:00
patientx
8909c12ea9
Update model_management.py 2025-06-14 13:27:49 +03:00
patientx
8efd441de9
Merge branch 'comfyanonymous:master' into master 2025-06-14 12:43:19 +03:00
comfyanonymous
29596bd53f
Small cosmos attention code refactor. (#8530) 2025-06-14 05:02:05 -04:00
Kohaku-Blueleaf
520eb77b72
LoRA Trainer: LoRA training node in weight adapter scheme (#8446) 2025-06-13 19:25:59 -04:00
patientx
e3720cd495
Merge branch 'comfyanonymous:master' into master 2025-06-13 15:04:05 +03:00
comfyanonymous
c69af655aa
Uncap cosmos predict2 res and fix mem estimation. (#8518) 2025-06-13 07:30:18 -04:00
comfyanonymous
251f54a2ad
Basic initial support for cosmos predict2 text to image 2B and 14B models. (#8517) 2025-06-13 07:05:23 -04:00
patientx
a5e9b6729c
Update zluda.py 2025-06-13 01:40:34 +03:00
patientx
7bd5bcd135
Update zluda-default.py 2025-06-13 01:40:11 +03:00
patientx
c06a15d8a5
Update zluda.py 2025-06-13 01:03:45 +03:00
patientx
896bda9003
Update zluda-default.py 2025-06-13 01:03:14 +03:00
patientx
7d5f4074b6
Add files via upload 2025-06-12 16:10:32 +03:00
patientx
79dda39260
Update zluda.py 2025-06-12 13:20:24 +03:00
patientx
08784dc90d
Update zluda.py 2025-06-12 13:19:59 +03:00
patientx
11af025690
Update zluda.py 2025-06-12 13:11:03 +03:00
patientx
828b7636d0
Update zluda.py 2025-06-12 13:10:40 +03:00
patientx
f53791d5d2
Merge branch 'comfyanonymous:master' into master 2025-06-12 00:32:55 +03:00
pythongosssss
50c605e957
Add support for sqlite database (#8444)
* Add support for sqlite database

* fix
2025-06-11 16:43:39 -04:00
comfyanonymous
8a4ff747bd
Fix mistake in last commit. (#8496)
* Move to right place.
2025-06-11 15:13:29 -04:00
comfyanonymous
af1eb58be8
Fix black images on some flux models in fp16. (#8495) 2025-06-11 15:09:11 -04:00
patientx
06ac233007
Merge branch 'comfyanonymous:master' into master 2025-06-10 20:34:42 +03:00
comfyanonymous
6e28a46454
Apple most likely is never fixing the fp16 attention bug. (#8485) 2025-06-10 13:06:24 -04:00
patientx
4bc3866c67
Merge branch 'comfyanonymous:master' into master 2025-06-09 21:10:00 +03:00
comfyanonymous
7f800d04fa
Enable AMD fp8 and pytorch attention on some GPUs. (#8474)
Information is from the pytorch source code.
2025-06-09 12:50:39 -04:00
patientx
b4d015f5f3
Merge branch 'comfyanonymous:master' into master 2025-06-08 21:21:41 +03:00
comfyanonymous
97755eed46
Enable fp8 ops by default on gfx1201 (#8464) 2025-06-08 14:15:34 -04:00
patientx
156aedd995
Merge branch 'comfyanonymous:master' into master 2025-06-07 19:30:45 +03:00
comfyanonymous
daf9d25ee2
Cleaner torch version comparisons. (#8453) 2025-06-07 10:01:15 -04:00
patientx
d28b4525b3
Merge branch 'comfyanonymous:master' into master 2025-06-06 17:10:34 +03:00
comfyanonymous
3b4b171e18
Alternate fix for #8435 (#8442) 2025-06-06 09:43:27 -04:00
patientx
67fc8e3325
Merge branch 'comfyanonymous:master' into master 2025-06-06 01:42:57 +03:00
comfyanonymous
4248b1618f
Let chroma TE work on regular flux. (#8429) 2025-06-05 10:07:17 -04:00
patientx
9aeff135b2
Update zluda.py 2025-06-02 02:55:19 +03:00
patientx
803f82189a
Merge branch 'comfyanonymous:master' into master 2025-06-01 17:44:48 +03:00
comfyanonymous
fb4754624d
Make the casting in lists the same as regular inputs. (#8373) 2025-06-01 05:39:54 -04:00
comfyanonymous
19e45e9b0e
Make it easier to pass lists of tensors to models. (#8358) 2025-05-31 20:00:20 -04:00
patientx
d74ffb792a
Merge branch 'comfyanonymous:master' into master 2025-05-31 01:55:42 +03:00
drhead
08b7cc7506
use fused multiply-add pointwise ops in chroma (#8279) 2025-05-30 18:09:54 -04:00
patientx
07b8d211e6
Merge branch 'comfyanonymous:master' into master 2025-05-30 23:48:15 +03:00
comfyanonymous
704fc78854
Put ROCm version in tuple to make it easier to enable stuff based on it. (#8348) 2025-05-30 15:41:02 -04:00
patientx
c74742444d
Merge branch 'comfyanonymous:master' into master 2025-05-29 18:29:06 +03:00
comfyanonymous
f2289a1f59
Delete useless file. (#8327) 2025-05-29 08:29:37 -04:00
patientx
46a997fb23
Merge branch 'comfyanonymous:master' into master 2025-05-29 10:56:01 +03:00
comfyanonymous
5e5e46d40c
Not really tested WAN Phantom Support. (#8321) 2025-05-28 23:46:15 -04:00
comfyanonymous
1c1687ab1c
Support HiDream SimpleTuner loras. (#8318) 2025-05-28 18:47:15 -04:00
patientx
5b5165371e
Merge branch 'comfyanonymous:master' into master 2025-05-28 01:06:44 +03:00
comfyanonymous
06c661004e
Memory estimation code can now take into account conds. (#8307) 2025-05-27 15:09:05 -04:00
patientx
8609a6dced
Merge branch 'comfyanonymous:master' into master 2025-05-27 01:03:35 +03:00
comfyanonymous
89a84e32d2
Disable initial GPU load when novram is used. (#8294) 2025-05-26 16:39:27 -04:00
patientx
bbcb33ea72
Merge branch 'comfyanonymous:master' into master 2025-05-26 16:26:39 +03:00
comfyanonymous
e5799c4899
Enable pytorch attention by default on AMD gfx1151 (#8282) 2025-05-26 04:29:25 -04:00
patientx
48bbdd0842
Merge branch 'comfyanonymous:master' into master 2025-05-25 15:38:51 +03:00
comfyanonymous
a0651359d7
Return proper error if diffusion model not detected properly. (#8272) 2025-05-25 05:28:11 -04:00
patientx
9790aaac7b
Merge branch 'comfyanonymous:master' into master 2025-05-24 14:00:54 +03:00
comfyanonymous
5a87757ef9
Better error if sageattention is installed but a dependency is missing. (#8264) 2025-05-24 06:43:12 -04:00
patientx
3b69a08c08
Merge branch 'comfyanonymous:master' into master 2025-05-24 04:06:28 +03:00
comfyanonymous
0b50d4c0db
Add argument to explicitly enable fp8 compute support. (#8257)
This can be used to test if your current GPU/pytorch version supports fp8 matrix mult in combination with --fast or the fp8_e4m3fn_fast dtype.
2025-05-23 17:43:50 -04:00
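A hedged probe in that spirit; torch._scaled_mm is a private API whose signature varies across pytorch versions, so treat this as a sketch rather than the flag's implementation:

```python
import torch

def fp8_matmul_supported() -> bool:
    # The private torch._scaled_mm signature has changed between releases,
    # so any exception is treated as "unsupported".
    try:
        a = torch.randn(32, 32, device="cuda").to(torch.float8_e4m3fn)
        b = torch.randn(32, 32, device="cuda").to(torch.float8_e4m3fn).t()
        one = torch.tensor(1.0, device="cuda")
        torch._scaled_mm(a, b, scale_a=one, scale_b=one)
        return True
    except Exception:
        return False
```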
patientx
3e49c3e2ff
Merge branch 'comfyanonymous:master' into master 2025-05-24 00:01:56 +03:00
drhead
30b2eb8a93
create arange on-device (#8255) 2025-05-23 16:15:06 -04:00
patientx
c653935b37
Merge branch 'comfyanonymous:master' into master 2025-05-23 13:57:11 +03:00