comfyanonymous
c012400240
Initial support for qwen image model. ( #9179 )
2025-08-04 22:53:25 -04:00
doctorpangloss
7ba86929f7
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-08-04 13:44:45 -07:00
comfyanonymous
03895dea7c
Fix another issue with the PR. ( #9170 )
2025-08-04 04:33:04 -04:00
comfyanonymous
84f9759424
Add some warnings and prevent crash when cond devices don't match. ( #9169 )
2025-08-04 04:20:12 -04:00
comfyanonymous
7991341e89
Various fixes for broken things from earlier PR. ( #9168 )
2025-08-04 04:02:40 -04:00
comfyanonymous
140ffc7fdc
Fix broken controlnet from last PR. ( #9167 )
2025-08-04 03:28:12 -04:00
comfyanonymous
182f90b5ec
Lower cond VRAM use by casting at the same time as the device transfer. ( #9159 )
2025-08-04 03:11:53 -04:00
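A minimal sketch of the idea behind this commit, assuming a generic conditioning tensor; `move_cond` and the tensor shape are hypothetical, not ComfyUI's actual code. Doing the device transfer and the dtype cast in one `Tensor.to` call avoids briefly holding a full-precision copy on the target device, which is where the VRAM saving comes from.

```python
import torch

def move_cond(cond: torch.Tensor, device: torch.device, dtype: torch.dtype) -> torch.Tensor:
    # Two-step version, cond.to(device).to(dtype), allocates a full-precision
    # copy on `device` before casting it down.
    # The combined call converts during the copy, so only the casted tensor
    # ever lives on the target device.
    return cond.to(device=device, dtype=dtype, non_blocking=True)

# usage sketch with an illustrative conditioning shape
cond = torch.randn(1, 77, 4096)
if torch.cuda.is_available():
    cond = move_cond(cond, torch.device("cuda"), torch.float16)
```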
comfyanonymous
aebac22193
Cleanup. ( #9160 )
2025-08-03 07:08:11 -04:00
comfyanonymous
13aaa66ec2
Make sure context is on the right device. ( #9154 )
2025-08-02 15:09:23 -04:00
comfyanonymous
5f582a9757
Make sure all the conds are on the right device. ( #9151 )
2025-08-02 15:00:13 -04:00
doctorpangloss
59bc6afa7b
Fixes to tests and progress bars in xet
2025-08-01 17:26:30 -07:00
doctorpangloss
87bed08124
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-08-01 16:05:47 -07:00
comfyanonymous
1e638a140b
Tiny wan vae optimizations. ( #9136 )
2025-08-01 05:25:38 -04:00
doctorpangloss
3c9d311dee
Improvements to torch.compile
...
- the model weights generally have to be patched ahead of time for compilation to work
- the model downloader matches the folder_paths API a bit better
- tweak the logging from the execution node
2025-07-30 19:27:40 -07:00
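The first bullet above is worth a concrete illustration. A minimal sketch, assuming a plain `torch.nn.Module` and a hypothetical `patches` dict of additive weight deltas (e.g. merged LoRA weights); the only real API here is `torch.compile`, and the point is the ordering, since mutating weights after compilation can invalidate or bypass what the compiled graph specialized on.

```python
import torch

def compile_after_patching(model: torch.nn.Module, patches: dict) -> torch.nn.Module:
    # `patches` maps parameter names to additive weight deltas; this helper is
    # illustrative, not the repository's actual patching code.
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in patches:
                param.add_(patches[name].to(device=param.device, dtype=param.dtype))
    # Compile only once the weights are in their final form.
    return torch.compile(model)
```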
doctorpangloss
83184916c1
Improvements to GGUF
...
- GGUF now works and includes 4-bit quants; this allows WAN to run on 24GB VRAM GPUs
- the logger only shows full stack traces for errors
- helper functions for the Colab notebook
- fix nvcr.io auth error
- LoRA issues are now reported more clearly
- the model downloader will use the Hugging Face cache and symlinking if supported on your platform
- the torch.compile node now correctly patches the model before compilation
2025-07-30 18:28:52 -07:00
chaObserv
61b08d4ba6
Replace manual x * sigmoid(x) with torch silu in VAE nonlinearity ( #9057 )
2025-07-30 19:25:56 -04:00
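For reference, the replacement relies on the identity SiLU(x) = x * sigmoid(x); a small before/after sketch, where the built-in op avoids materializing the intermediate sigmoid tensor:

```python
import torch
import torch.nn.functional as F

def nonlinearity_manual(x: torch.Tensor) -> torch.Tensor:
    # old form: allocates sigmoid(x) as a separate full-size tensor
    return x * torch.sigmoid(x)

def nonlinearity_silu(x: torch.Tensor) -> torch.Tensor:
    # equivalent built-in SiLU (a.k.a. swish)
    return F.silu(x)

x = torch.randn(2, 8)
assert torch.allclose(nonlinearity_manual(x), nonlinearity_silu(x))
```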
comfyanonymous
da9dab7edd
Small wan camera memory optimization. ( #9111 )
2025-07-30 05:55:26 -04:00
comfyanonymous
dca6bdd4fa
Make wan2.2 5B i2v take a lot less memory. ( #9102 )
2025-07-29 19:44:18 -04:00
comfyanonymous
7d593baf91
Extra reserved VRAM on large cards on Windows. ( #9093 )
2025-07-29 04:07:45 -04:00
doctorpangloss
69a4906964
Experimental GGUF support
2025-07-28 17:02:20 -07:00
doctorpangloss
fe4057385d
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-07-28 14:38:18 -07:00
doctorpangloss
03e5430121
Improvements for Wan 2.2 support
...
- add xet support and add the xet cache to manageable directories
- xet is enabled by default
- fix logging to root in various places
- improve logging about model unloading and loading
- TorchCompileNode now supports the VAE
- a missing torchaudio causes less noise in the logs
- feature flags are assumed to support everything in the distributed progress context
- fix progress notifications
2025-07-28 14:36:27 -07:00
comfyanonymous
c60dc4177c
Remove unnecessary clones in the wan2.2 VAE. ( #9083 )
2025-07-28 14:48:19 -04:00
doctorpangloss
b5a50301f6
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-07-28 10:06:47 -07:00
comfyanonymous
a88788dce6
Wan 2.2 support. ( #9080 )
2025-07-28 08:00:23 -04:00
Benjamin Berman
583ddd6b38
Always use spawn context, even on Linux
2025-07-26 17:42:44 -07:00
comfyanonymous
0621d73a9c
Remove useless code. ( #9059 )
2025-07-26 04:44:19 -04:00
doctorpangloss
a3ae6e74d2
fix tests
2025-07-25 15:01:30 -07:00
doctorpangloss
3684cff31b
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-07-25 12:48:05 -07:00
comfyanonymous
e6e5d33b35
Remove useless code. ( #9041 )
...
This was only needed on PyTorch 2.0 and older.
2025-07-25 04:58:28 -04:00
Eugene Fairley
4293e4da21
Add WAN ATI support ( #8874 )
...
* Add WAN ATI support
* Fixes
* Fix length
* Remove extra functions
* Fix
* Fix
* Ruff fix
* Remove torch.no_grad
* Add batch trajectory logic
* Scale inputs before and after motion patch
* Batch image/trajectory
* Ruff fix
* Clean up
2025-07-24 20:59:19 -04:00
comfyanonymous
69cb57b342
Print xpu device name. ( #9035 )
2025-07-24 15:06:25 -04:00
honglyua
0ccc88b03f
Support Iluvatar CoreX ( #8585 )
...
* Support Iluvatar CoreX
Co-authored-by: mingjiang.li <mingjiang.li@iluvatar.com>
2025-07-24 13:57:36 -04:00
Kohaku-Blueleaf
eb2f78b4e0
[Training Node] algo support, grad acc, optional grad ckpt ( #9015 )
...
* Add factorization utils for lokr
* Add lokr train impl
* Add loha train impl
* Add adapter map for algo selection
* Add optional grad ckpt and algo selection
* Update __init__.py
* correct key name for loha
* Use custom fwd/bwd func and better init for loha
* Support gradient accumulation
* Fix bugs of loha
* use more stable init
* Add OFT training
* linting
2025-07-23 20:57:27 -04:00
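Of the items above, gradient accumulation is the most general; a minimal sketch of the pattern in plain PyTorch (not the training node's actual code): the loss is scaled by the accumulation factor and the optimizer only steps every `accum_steps` micro-batches, giving a larger effective batch size without extra VRAM.

```python
import torch

def train_with_grad_accumulation(model, optimizer, data_loader, loss_fn, accum_steps: int = 4):
    """Generic gradient-accumulation loop; illustrative only."""
    model.train()
    optimizer.zero_grad()
    for step, (x, y) in enumerate(data_loader):
        loss = loss_fn(model(x), y) / accum_steps  # scale so gradients average over the window
        loss.backward()                            # gradients accumulate in param.grad
        if (step + 1) % accum_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
```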
chaObserv
e729a5cc11
Separate denoised and noise estimation in Euler CFG++ ( #9008 )
...
This will change their behavior with the sampling CONST type.
It also combines euler_cfg_pp and euler_ancestral_cfg_pp into one main function.
2025-07-23 19:47:05 -04:00
comfyanonymous
d3504e1778
Enable pytorch attention by default for gfx1201 on torch 2.8 ( #9029 )
2025-07-23 19:21:29 -04:00
comfyanonymous
a86a58c308
Fix xpu function not implemented p2. ( #9027 )
2025-07-23 18:18:20 -04:00
comfyanonymous
39dda1d40d
Fix xpu function not implemented. ( #9026 )
2025-07-23 18:10:59 -04:00
comfyanonymous
5ad33787de
Add default device argument. ( #9023 )
2025-07-23 14:20:49 -04:00
Simon Lui
255f139863
Add xpu version for async offload and some other things. ( #9004 )
2025-07-22 15:20:09 -04:00
doctorpangloss
9fec871dbc
fix logger misspelling "colab", add README instructions for running in Colab and so-called ComfyUI portable
2025-07-17 16:01:03 -07:00
doctorpangloss
9376830295
fix windows standalone build
2025-07-17 14:18:01 -07:00
doctorpangloss
c2c45e061e
fix noisy logging messages
2025-07-17 13:06:55 -07:00
doctorpangloss
d709e10bcb
update docker image
2025-07-17 12:45:59 -07:00
doctorpangloss
8fcccad6a2
fix linting error
2025-07-16 15:32:58 -07:00
doctorpangloss
e455b0ad2e
fix opentel
2025-07-16 15:19:19 -07:00
doctorpangloss
6ecb9e71f6
this is not disabling the MeterProvider successfully
2025-07-16 14:33:06 -07:00
doctorpangloss
8176078118
add types for configuration args
2025-07-16 14:26:42 -07:00
doctorpangloss
94310e51e3
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-07-16 14:24:12 -07:00
doctorpangloss
d4d62500ac
fix tests, disable opentel metrics provider
2025-07-16 14:21:50 -07:00
doctorpangloss
f23fe2e2cf
updates to dockerfile and pyproject.yaml
2025-07-16 13:46:05 -07:00
comfyanonymous
491fafbd64
Silence clip tokenizer warning. ( #8934 )
2025-07-16 14:42:07 -04:00
Harel Cain
9bc2798f72
LTXV VAE decoder: switch default padding mode ( #8930 )
2025-07-16 13:54:38 -04:00
comfyanonymous
50afba747c
Add attempt to work around the safetensors mmap issue. ( #8928 )
2025-07-16 03:42:17 -04:00
doctorpangloss
bf3345e083
Add support for ComfyUI Manager
2025-07-15 14:06:29 -07:00
doctorpangloss
f1cfff6da5
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-07-15 10:21:19 -07:00
doctorpangloss
96b4e04315
packaging fixes
...
- enable user db
- fix main_pre order everywhere
- change absolute imports to relative imports everywhere
- better async support
2025-07-15 10:19:33 -07:00
Yoland Yan
543c24108c
Fix wrong reference bug ( #8910 )
2025-07-14 20:45:55 -04:00
doctorpangloss
c086c5e005
fix pylint issues
2025-07-14 17:44:43 -07:00
doctorpangloss
499f9be5fa
merging upstream with hiddenswitch
...
- deprecate our preview image fork of the frontend because upstream now has the needed functionality
- merge the context executor from upstream into ours
2025-07-14 16:55:47 -07:00
doctorpangloss
bd6f28e3bd
Move back to comfy_execution directory
2025-07-14 13:48:34 -07:00
doctorpangloss
04e411c32e
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-07-14 13:45:09 -07:00
comfyanonymous
b40143984c
Add model detection error hint for lora. ( #8880 )
2025-07-12 03:49:26 -04:00
doctorpangloss
f576f8124a
record workflow when there's an error
2025-07-11 13:51:45 -07:00
comfyanonymous
938d3e8216
Remove windows line endings. ( #8866 )
2025-07-11 02:37:51 -04:00
guill
2b653e8c18
Support for async node functions ( #8830 )
...
* Support for async execution functions
This commit adds support for node execution functions defined as async. When
a node's execution function is defined as async, we can continue
executing other nodes while it is processing.
Standard uses of `await` should "just work", but people will still have
to be careful if they spawn actual threads. Because torch doesn't really
have async/await versions of functions, this won't particularly help
with most locally-executing nodes, but it does work for e.g. web
requests to other machines.
In addition to the execute function, the `VALIDATE_INPUTS` and
`check_lazy_status` functions can also be defined as async, though we'll
only resolve one node at a time right now for those.
* Add the execution model tests to CI
* Add a missing file
It looks like this got caught by .gitignore? There's probably a better
place to put it, but I'm not sure what that is.
* Add the websocket library for automated tests
* Add additional tests for async error cases
Also fixes one bug that was found when an async function throws an error
after being scheduled on a task.
* Add a feature flags message to reduce bandwidth
We now only send 1 preview message of the latest type the client can
support.
We'll add a console warning when the client fails to send a feature
flags message at some point in the future.
* Add async tests to CI
* Don't actually add new tests in this PR
Will do it in a separate PR
* Resolve unit test in GPU-less runner
* Just remove the tests that GHA can't handle
* Change line endings to UNIX-style
* Avoid loading model_management.py so early
Because model_management.py has a top-level `logging.info`, we have to
be careful not to import that file before we call `setup_logging`. If we
do, we end up having the default logging handler registered in addition
to our custom one.
2025-07-10 14:46:19 -04:00
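A hedged sketch of what an async execution function can look like under this PR, following the usual ComfyUI node conventions (`INPUT_TYPES`, `RETURN_TYPES`, `FUNCTION`); the class name, category, default URL, and the aiohttp call are illustrative, and as the PR notes this mainly pays off for I/O-bound work such as requests to other machines, not torch-heavy local nodes.

```python
import aiohttp

class RemoteCaptionNode:
    """Illustrative async node: fetches text from another machine while other nodes keep running."""

    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"url": ("STRING", {"default": "http://localhost:8000/caption"})}}

    RETURN_TYPES = ("STRING",)
    FUNCTION = "execute"
    CATEGORY = "examples/async"

    async def execute(self, url):
        # Standard `await` usage "just works"; the executor can keep running
        # other nodes while this request is in flight.
        async with aiohttp.ClientSession() as session:
            async with session.get(url) as resp:
                text = await resp.text()
        return (text,)
```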
doctorpangloss
1338e8935f
fix metrics exporter messing up health endpoint
2025-07-09 07:15:42 -07:00
doctorpangloss
b268296504
update upstream for flux fixes
2025-07-09 06:28:17 -07:00
chaObserv
aac10ad23a
Add SA-Solver sampler ( #8834 )
2025-07-08 16:17:06 -04:00
josephrocca
974254218a
Un-hardcode chroma patch_size ( #8840 )
2025-07-08 15:56:59 -04:00
comfyanonymous
75d327abd5
Remove some useless code. ( #8812 )
2025-07-06 07:07:39 -04:00
comfyanonymous
ee615ac269
Add warning when loading file unsafely. ( #8800 )
2025-07-05 14:34:57 -04:00
chaObserv
f41f323c52
Add the denoising step to several samplers ( #8780 )
2025-07-03 19:20:53 -04:00
City
d9277301d2
Initial code for new SLG node ( #8759 )
2025-07-02 20:13:43 -04:00
comfyanonymous
111f583e00
Fall back to regular op when fp8 op throws exception. ( #8761 )
2025-07-02 00:57:13 -04:00
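A minimal sketch of the fallback pattern this commit describes, with the fp8 fast path passed in as a hypothetical callable (the real op and its failure modes live in the repository's ops code): if the fp8 path raises, the result is recomputed with a regular linear op at the activation dtype.

```python
import torch
import torch.nn.functional as F
from typing import Callable, Optional

def linear_with_fallback(
    fp8_op: Callable[..., torch.Tensor],      # hypothetical fp8 fast path
    x: torch.Tensor,
    weight: torch.Tensor,
    bias: Optional[torch.Tensor] = None,
) -> torch.Tensor:
    """If the fp8 op raises (unsupported shape, dtype, or hardware), redo it as a regular op."""
    try:
        return fp8_op(x, weight, bias)
    except Exception:
        # fall back: cast the weight to the activation dtype and use a plain linear
        return F.linear(x, weight.to(x.dtype), bias)
```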
chaObserv
b22e97dcfa
Migrate ER-SDE from VE to VP algorithm and add its sampler node ( #8744 )
...
Apply alpha scaling in the algorithm for reverse-time SDE and add custom ER-SDE sampler node for other solver types (SDE, ODE).
2025-07-01 02:38:52 -04:00
comfyanonymous
170c7bb90c
Fix contiguous issue with pytorch nightly. ( #8729 )
2025-06-29 06:38:40 -04:00
comfyanonymous
396454fa41
Reorder the schedulers so simple is the default one. ( #8722 )
2025-06-28 18:12:56 -04:00
xufeng
ba9548f756
“--whitelist-custom-nodes” args for comfy core to go with “--disable-all-custom-nodes” for development purposes ( #8592 )
...
* feat: “--whitelist-custom-nodes” args for comfy core to go with “--disable-all-custom-nodes” for development purposes
* feat: Simplify custom nodes whitelist logic to use consistent code paths
2025-06-28 15:24:02 -04:00
comfyanonymous
c36be0ea09
Fix memory estimation bug with kontext. ( #8709 )
2025-06-27 17:21:12 -04:00
comfyanonymous
9093301a49
Don't add a tiny bit of random noise when VAE encoding. ( #8705 )
...
Shouldn't change outputs but might make things a tiny bit more
deterministic.
2025-06-27 14:14:56 -04:00
doctorpangloss
a7aff3565b
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-06-26 16:57:25 -07:00
comfyanonymous
ef5266b1c1
Support Flux Kontext Dev model. ( #8679 )
2025-06-26 11:28:41 -04:00
comfyanonymous
a96e65df18
Disable omnigen2 fp16 on older pytorch versions. ( #8672 )
2025-06-26 03:39:09 -04:00
comfyanonymous
ec70ed6aea
Omnigen2 model implementation. ( #8669 )
2025-06-25 19:35:57 -04:00
comfyanonymous
7a13f74220
unet -> diffusion model ( #8659 )
2025-06-25 04:52:34 -04:00
doctorpangloss
e034d0bb24
fix sampling bug
2025-06-24 16:37:20 -07:00
doctorpangloss
f6339e8115
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2025-06-24 12:52:54 -07:00
doctorpangloss
f8eea225d4
modify main.py
2025-06-24 12:30:54 -07:00
doctorpangloss
8838957554
move main.py
2025-06-24 12:30:32 -07:00
doctorpangloss
8eba1733c8
remove main.py
2025-06-24 12:29:47 -07:00
chaObserv
8042eb20c6
Singlestep DPM++ SDE for RF ( #8627 )
...
Refactor the algorithm, and apply alpha scaling.
2025-06-24 14:59:09 -04:00
comfyanonymous
1883e70b43
Fix exception when using a noise mask with cosmos predict2. ( #8621 )
...
* Fix exception when using a noise mask with cosmos predict2.
* Fix ruff.
2025-06-21 03:30:39 -04:00
comfyanonymous
f7fb193712
Small flux optimization. ( #8611 )
2025-06-20 05:37:32 -04:00
comfyanonymous
7e9267fa77
Make flux controlnet work with sd3 text enc. ( #8599 )
2025-06-19 18:50:05 -04:00
comfyanonymous
91d40086db
Fix pytorch warning. ( #8593 )
2025-06-19 11:04:52 -04:00
Benjamin Berman
8041b1b54d
fix ideogram seed, fix polling for results, fix logger
2025-06-18 09:38:20 -07:00
Benjamin Berman
d9ee4137c7
gracefully handle logs when they are never set up (not sure why this is occurring)
2025-06-18 06:14:09 -07:00
Benjamin Berman
f507bec91a
fix issue with queue retrieval in distributed environment, fix text progress, fix folder paths being aggressively resolved, fix ideogram seed
2025-06-18 02:43:50 -07:00
doctorpangloss
1d29c97266
fix tests
2025-06-17 18:38:42 -07:00