Commit Graph

1975 Commits

Author SHA1 Message Date
patientx
634c398fd5
Merge branch 'comfyanonymous:master' into master 2025-05-04 16:14:44 +03:00
comfyanonymous
80a44b97f5
Change lumina to native RMSNorm. (#7935) 2025-05-04 06:39:23 -04:00
comfyanonymous
9187a09483
Change cosmos and hydit models to use the native RMSNorm. (#7934) 2025-05-04 06:26:20 -04:00
patientx
be4b829886
Merge branch 'comfyanonymous:master' into master 2025-05-04 03:46:17 +03:00
comfyanonymous
3041e5c354
Switch mochi and wan models to use pytorch RMSNorm. (#7925)
* Switch genmo model to native RMSNorm.

* Switch WAN to native RMSNorm.
2025-05-03 19:07:55 -04:00
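Several commits above replace hand-rolled RMSNorm layers (genmo/mochi, WAN, ltxv, cosmos, hydit, lumina) with PyTorch's native `torch.nn.RMSNorm`. As a reminder of what that layer computes, here is a minimal pure-Python sketch of the formula (illustrative only, not the repository's code):

```python
import math

def rms_norm(x, weight=None, eps=1e-6):
    """Root-mean-square normalization over a vector.

    Computes x / sqrt(mean(x^2) + eps), optionally scaled by a learned
    per-element weight -- the same formula torch.nn.RMSNorm implements.
    """
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    out = [v / rms for v in x]
    if weight is not None:
        out = [o * w for o, w in zip(out, weight)]
    return out

# After normalization the output has unit root-mean-square magnitude.
normalized = rms_norm([3.0, 4.0])
```

Using the built-in `torch.nn.RMSNorm` lets these models benefit from fused kernels instead of maintaining per-model implementations.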
patientx
f98aad15d5
Merge branch 'comfyanonymous:master' into master 2025-05-02 23:58:33 +03:00
patientx
1068783ff8
Update zluda.py 2025-05-02 20:21:10 +03:00
patientx
3b827e0a59
Update zluda.py 2025-05-02 20:20:41 +03:00
Kohaku-Blueleaf
2ab9618732
Fix the bugs in OFT/BOFT module (#7909)
* Correct calculate_weight and load for OFT

* Correct calculate_weight and loading for BOFT
2025-05-02 13:12:37 -04:00
patientx
bc1fa6e013
Create zluda.py (custom zluda for miopen-triton) 2025-05-02 17:44:26 +03:00
patientx
073ff4a11d
Create placeholder 2025-05-01 23:05:14 +03:00
patientx
aad001bfbe
Add files via upload 2025-05-01 23:04:46 +03:00
patientx
caa3597bb7
Create placeholder 2025-05-01 23:03:51 +03:00
patientx
085925ddf7
Update zluda.py 2025-05-01 20:49:28 +03:00
patientx
da173f67b3
Merge branch 'comfyanonymous:master' into master 2025-05-01 17:23:12 +03:00
comfyanonymous
aa9d759df3
Switch ltxv to use the pytorch RMSNorm. (#7897) 2025-05-01 06:33:42 -04:00
comfyanonymous
08ff5fa08a Cleanup chroma PR. 2025-04-30 20:57:30 -04:00
Silver
4ca3d84277
Support for Chroma - Flux1 Schnell distilled with CFG (#7355)
* Upload files for Chroma Implementation

* Remove trailing whitespace

* trim more trailing whitespace..oops

* remove unused imports

* Add supported_inference_dtypes

* Set min_length to 0 and remove attention_mask=True

* Set min_length to 1

* get_mdulations added from blepping and minor changes

* Add lora conversion if statement in lora.py

* Update supported_models.py

* update model_base.py

* add upstream commits

* set modelType.FLOW, will cause beta scheduler to work properly

* Adjust memory usage factor and remove unnecessary code

* fix mistake

* reduce code duplication

* remove unused imports

* refactor for upstream sync

* sync chroma-support with upstream via syncbranch patch

* Update sd.py

* Add Chroma as option for the OptimalStepsScheduler node
2025-04-30 20:57:00 -04:00
patientx
f49d26848b
Merge branch 'comfyanonymous:master' into master 2025-04-30 13:46:06 +03:00
comfyanonymous
dbc726f80c
Better vace memory estimation. (#7875) 2025-04-29 20:42:00 -04:00
comfyanonymous
0a66d4b0af
Per device stream counters for async offload. (#7873) 2025-04-29 20:28:52 -04:00
patientx
64709ce55c
Merge branch 'comfyanonymous:master' into master 2025-04-29 14:18:45 +03:00
guill
68f0d35296
Add support for VIDEO as a built-in type (#7844)
* Add basic support for videos as types

This PR adds support for VIDEO as first-class types. In order to avoid
unnecessary costs, VIDEO outputs must implement the `VideoInput` ABC,
but their implementation details can vary. Included are two
implementations of this type which can be returned by other nodes:

* `VideoFromFile` - Created with either a path on disk (as a string) or
  a `io.BytesIO` containing the contents of a file in a supported format
  (like .mp4). This implementation won't actually load the video unless
  necessary. It will also avoid re-encoding when saving if possible.
* `VideoFromComponents` - Created from an image tensor and an optional
  audio tensor.

Currently, only h264 encoded videos in .mp4 containers are supported for
saving, but the plan is to add additional encodings/containers in the
near future (particularly .webm).

* Add optimization to avoid parsing entire video

* Improve type declarations to reduce warnings

* Make sure bytesIO objects can be read many times

* Fix a potential issue when saving long videos

* Fix incorrect type annotation

* Add a `LoadVideo` node to make testing easier

* Refactor new types out of the base comfy folder

I've created a new `comfy_api` top-level module. The intention is that
anything within this folder would be covered by semver-style versioning
that would allow custom nodes to rely on them not introducing breaking
changes.

* Fix linting issue
2025-04-29 05:58:00 -04:00
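The PR above describes VIDEO outputs implementing a `VideoInput` ABC with lazy-loading file-backed and tensor-backed variants. A minimal sketch of that shape, with hypothetical method names (the real interface lives in the `comfy_api` module and may differ):

```python
import io
from abc import ABC, abstractmethod

class VideoInput(ABC):
    """Abstract video type: outputs must implement this interface,
    but implementation details can vary (sketch; names hypothetical)."""

    @abstractmethod
    def get_components(self):
        """Return decoded (images, audio) components."""

    @abstractmethod
    def save_to(self, path):
        """Write the video to disk, re-encoding only when necessary."""

class VideoFromFile(VideoInput):
    """Wraps a path string or io.BytesIO; defers any decoding until needed."""

    def __init__(self, source):
        self.source = source   # str path or io.BytesIO
        self._decoded = None   # lazy: nothing is parsed at construction

    def get_components(self):
        if self._decoded is None:
            self._decoded = ("<images>", "<audio>")  # placeholder for real decode
        return self._decoded

    def save_to(self, path):
        # When container/codec already match, bytes could be copied verbatim
        # instead of re-encoded -- the optimization the description mentions.
        pass

video = VideoFromFile(io.BytesIO(b"\x00"))
```

The ABC-with-lazy-implementations design is what lets `VideoFromFile` avoid parsing the whole file unless a node actually asks for the frames.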
patientx
0aeb958ea5
Merge branch 'comfyanonymous:master' into master 2025-04-29 01:49:37 +03:00
comfyanonymous
83d04717b6
Support HiDream E1 model. (#7857) 2025-04-28 15:01:15 -04:00
chaObserv
c15909bb62
CFG++ for gradient estimation sampler (#7809) 2025-04-28 13:51:35 -04:00
comfyanonymous
5a50c3c7e5
Fix stream priority to support older pytorch. (#7856) 2025-04-28 13:07:21 -04:00
Pam
30159a7fe6
Save v pred zsnr metadata (#7840) 2025-04-28 13:03:21 -04:00
patientx
6244dfa1e1
Merge branch 'comfyanonymous:master' into master 2025-04-28 01:13:53 +03:00
comfyanonymous
c8cd7ad795
Use stream for casting if enabled. (#7833) 2025-04-27 05:38:11 -04:00
patientx
ec34f4da57
Merge branch 'comfyanonymous:master' into master 2025-04-27 05:01:55 +03:00
comfyanonymous
ac10a0d69e
Make loras work with --async-offload (#7824) 2025-04-26 19:56:22 -04:00
patientx
9cc8e2e1d0
Merge branch 'comfyanonymous:master' into master 2025-04-26 23:32:14 +03:00
comfyanonymous
0dcc75ca54
Add experimental --async-offload lowvram weight offloading. (#7820)
This should speed up the lowvram mode a bit. It currently is only enabled when --async-offload is used but it will be enabled by default in the future if there are no problems.
2025-04-26 16:11:21 -04:00
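The async-offload commit above overlaps weight transfers with compute in lowvram mode. The real implementation uses per-device CUDA streams (see the later "Per device stream counters" commit); as a purely conceptual sketch under that assumption, a background prefetcher that stays one layer ahead of the compute loop behaves analogously (hypothetical API, not ComfyUI's):

```python
import threading
import queue

def run_lowvram(layers, load_weights, compute):
    """Conceptual sketch of async weight offloading: prefetch the next
    layer's weights while the current layer computes. The real code
    overlaps copies and compute on CUDA streams rather than threads."""
    prefetched = queue.Queue(maxsize=1)  # stay exactly one layer ahead

    def prefetcher():
        for layer in layers:
            prefetched.put((layer, load_weights(layer)))  # blocks when full

    threading.Thread(target=prefetcher, daemon=True).start()
    x = 0
    for _ in layers:
        layer, weights = prefetched.get()  # weights already resident
        x = compute(layer, weights, x)
    return x
```

The `maxsize=1` bound mirrors the memory constraint: only one extra layer's weights are ever in flight, so the scheme speeds up lowvram mode without raising peak memory much.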
patientx
d6238ed3f0
Merge branch 'comfyanonymous:master' into master 2025-04-26 15:31:39 +03:00
comfyanonymous
23e39f2ba7
Add a T5TokenizerOptions node to set options for the T5 tokenizer. (#7803) 2025-04-25 19:36:00 -04:00
patientx
57a5e6e7ae
Merge branch 'comfyanonymous:master' into master 2025-04-26 01:23:01 +03:00
AustinMroz
78992c4b25
[NodeDef] Add documentation on widgetType (#7768)
* [NodeDef] Add documentation on widgetType

* Document required version for widgetType
2025-04-25 13:35:07 -04:00
patientx
224f72f90f
Merge branch 'comfyanonymous:master' into master 2025-04-25 14:11:50 +03:00
comfyanonymous
f935d42d8e Support SimpleTuner lycoris lora format for HiDream. 2025-04-25 03:11:14 -04:00
patientx
1d9338b4b9
Merge branch 'comfyanonymous:master' into master 2025-04-24 14:50:59 +03:00
thot experiment
e2eed9eb9b
throw away alpha channel in clip vision preprocessor (#7769)
saves users having to explicitly discard the channel
2025-04-23 21:28:36 -04:00
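The clip vision preprocessor change above silently drops the alpha channel so users no longer have to discard it themselves. On an image tensor this is just `image[..., :3]`; a plain-Python equivalent for illustration:

```python
def drop_alpha(pixels):
    """Discard the alpha channel from RGBA pixel rows, keeping RGB.

    Equivalent to image[..., :3] on a channels-last image tensor.
    """
    return [[px[:3] for px in row] for row in pixels]

rgba = [[(255, 0, 0, 128), (0, 255, 0, 255)]]
rgb = drop_alpha(rgba)  # [[(255, 0, 0), (0, 255, 0)]]
```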
patientx
d8a75b86e9
Update zluda.py 2025-04-24 03:11:46 +03:00
patientx
91d7a9d234
Merge branch 'comfyanonymous:master' into master 2025-04-23 12:41:27 +03:00
comfyanonymous
552615235d
Fix for dino lowvram. (#7748) 2025-04-23 04:12:52 -04:00
Robin Huang
0738e4ea5d
[API nodes] Add backbone for supporting api nodes in ComfyUI (#7745)
* Add Ideogram generate node.

* Add staging api.

* COMFY_API_NODE_NAME node property

* switch to boolean flag and use original node name for id

* add optional to type

* Add API_NODE and common error for missing auth token (#5)

* Add Minimax Video Generation + Async Task queue polling example (#6)

* [Minimax] Show video preview and embed workflow in output (#7)

* [API Nodes] Send empty request body instead of empty dictionary. (#8)

* Fixed: removed function from rebase.

* Add pydantic.

* Remove uv.lock

* Remove polling operations.

* Update stubs workflow.

* Remove polling comments.

* Update stubs.

* Use pydantic v2.

* Use pydantic v2.

* Add basic OpenAITextToImage node

* Add.

* convert image to tensor.

* Improve types.

* Ruff.

* Push tests.

* Handle multi-form data.

- Don't set content-type for multi-part/form
- Use data field instead of JSON

* Change to api.comfy.org

* Handle error code 409.

* Remove nodes.

---------

Co-authored-by: bymyself <cbyrne@comfy.org>
Co-authored-by: Yoland Y <4950057+yoland68@users.noreply.github.com>
2025-04-23 02:18:08 -04:00
patientx
a397c3aeb3
Merge branch 'comfyanonymous:master' into master 2025-04-22 13:28:29 +03:00
comfyanonymous
2d6805ce57
Add option for using fp8_e8m0fnu for model weights. (#7733)
Seems to break every model I have tried but worth testing?
2025-04-22 06:17:38 -04:00
patientx
c09fa908f5
Merge branch 'comfyanonymous:master' into master 2025-04-22 12:13:56 +03:00
Kohaku-Blueleaf
a8f63c0d5b
Support dora_scale on both axis (#7727) 2025-04-22 05:01:27 -04:00
Kohaku-Blueleaf
966c43ce26
Add OFT/BOFT algorithm in weight adapter (#7725) 2025-04-22 04:59:47 -04:00
comfyanonymous
3ab231f01f
Fix issue with WAN VACE implementation. (#7724) 2025-04-21 23:36:12 -04:00
Kohaku-Blueleaf
1f3fba2af5
Unified Weight Adapter system for better maintainability and future feature of Lora system (#7540) 2025-04-21 20:15:32 -04:00
comfyanonymous
5d0d4ee98a
Add strength control for vace. (#7717) 2025-04-21 19:36:20 -04:00
patientx
32ec658779
Merge branch 'comfyanonymous:master' into master 2025-04-21 23:16:54 +03:00
filtered
5d51794607
Add node type hint for socketless option (#7714)
* Add node type hint for socketless option

* nit - Doc
2025-04-21 16:13:00 -04:00
comfyanonymous
ce22f687cc
Support for WAN VACE preview model. (#7711)
* Support for WAN VACE preview model.

* Remove print.
2025-04-21 14:40:29 -04:00
patientx
d926896b55
Merge branch 'comfyanonymous:master' into master 2025-04-21 00:00:31 +03:00
comfyanonymous
2c735c13b4
Slightly better fix for #7687 2025-04-20 11:33:27 -04:00
patientx
0143b19681
Merge branch 'comfyanonymous:master' into master 2025-04-20 15:05:36 +03:00
comfyanonymous
fd27494441 Use empty t5 of size 128 for hidream, seems to give closer results. 2025-04-19 19:49:40 -04:00
power88
f43e1d7f41
Hidream: Allow loading hidream text encoders in CLIPLoader and DualCLIPLoader (#7676)
* Hidream: Allow partial loading text encoders

* reformat code for ruff check.
2025-04-19 19:47:30 -04:00
patientx
cd77ded0c7
Merge branch 'comfyanonymous:master' into master 2025-04-19 23:20:10 +03:00
comfyanonymous
636d4bfb89 Fix hard crash when the spiece tokenizer path is bad. 2025-04-19 15:55:43 -04:00
patientx
682a70bcf9
Update zluda.py 2025-04-19 00:53:59 +03:00
patientx
71ac9830ab
updated comfyui-frontend version 2025-04-17 22:15:11 +03:00
patientx
95773a0045
updated "comfyui-frontend version" - added "comfyui-workflow-templates" 2025-04-17 22:13:36 +03:00
patientx
d9211bbf1e
Merge branch 'comfyanonymous:master' into master 2025-04-17 20:51:12 +03:00
comfyanonymous
dbcfd092a2 Set default context_img_len to 257 2025-04-17 12:42:34 -04:00
comfyanonymous
c14429940f Support loading WAN FLF model. 2025-04-17 12:04:48 -04:00
patientx
6bbe3b19d4
Merge branch 'comfyanonymous:master' into master 2025-04-17 14:51:08 +03:00
comfyanonymous
0d720e4367 Don't hardcode length of context_img in wan code. 2025-04-17 06:25:39 -04:00
patientx
7950c510e7
Merge branch 'comfyanonymous:master' into master 2025-04-17 10:10:03 +03:00
comfyanonymous
1fc00ba4b6 Make hidream work with any latent resolution. 2025-04-16 18:34:14 -04:00
patientx
d05e9b8298
Merge branch 'comfyanonymous:master' into master 2025-04-17 01:22:11 +03:00
comfyanonymous
9899d187b1 Limit T5 to 128 tokens for HiDream: #7620 2025-04-16 18:07:55 -04:00
comfyanonymous
f00f340a56 Reuse code from flux model. 2025-04-16 17:43:55 -04:00
patientx
503292b3c0
Merge branch 'comfyanonymous:master' into master 2025-04-16 23:26:10 +03:00
Chenlei Hu
cce1d9145e
[Type] Mark input options NotRequired (#7614) 2025-04-16 15:41:00 -04:00
patientx
765224ad78
Merge branch 'comfyanonymous:master' into master 2025-04-16 12:10:01 +03:00
comfyanonymous
b4dc03ad76 Fix issue on old torch. 2025-04-16 04:53:56 -04:00
patientx
eee802e685
Update zluda.py 2025-04-16 00:53:18 +03:00
patientx
ad2fa1a675
Merge branch 'comfyanonymous:master' into master 2025-04-16 00:50:26 +03:00
comfyanonymous
9ad792f927 Basic support for hidream i1 model. 2025-04-15 17:35:05 -04:00
patientx
ab0860b96a
Merge branch 'comfyanonymous:master' into master 2025-04-15 21:37:22 +03:00
comfyanonymous
6fc5dbd52a Cleanup. 2025-04-15 12:13:28 -04:00
patientx
af58afcae8
Merge branch 'comfyanonymous:master' into master 2025-04-15 18:23:31 +03:00
comfyanonymous
3e8155f7a3 More flexible long clip support.
Add clip g long clip support.

Text encoder refactor.

Support llama models with different vocab sizes.
2025-04-15 10:32:21 -04:00
patientx
09491fecbd
Merge branch 'comfyanonymous:master' into master 2025-04-15 14:41:15 +03:00
comfyanonymous
8a438115fb add RMSNorm to comfy.ops 2025-04-14 18:00:33 -04:00
patientx
205bdad97e
fix frontend package version 2025-04-13 17:12:14 +03:00
patientx
b6d5765f0f
Added onnxruntime patching 2025-04-13 17:11:13 +03:00
patientx
f1513262d3
Merge branch 'comfyanonymous:master' into master 2025-04-13 03:21:00 +03:00
chaObserv
e51d9ba5fc
Add SEEDS (stage 2 & 3 DP) sampler (#7580)
* Add seeds stage 2 & 3 (DP) sampler

* Change the name to SEEDS in comment
2025-04-12 18:36:08 -04:00
catboxanon
1714a4c158
Add CublasOps support (#7574)
* CublasOps support

* Guard CublasOps behind --fast arg
2025-04-12 18:29:15 -04:00
patientx
81b9fcfe4d
Merge branch 'comfyanonymous:master' into master 2025-04-11 19:55:56 +03:00
Chargeuk
ed945a1790
Dependency Aware Node Caching for low RAM/VRAM machines (#7509)
* add dependency aware cache that removes a cached node as soon as all of its descendants have executed. This allows users with lower RAM to run workflows they would otherwise not be able to run. The downside is that every workflow will fully run each time even if no nodes have changed.

* remove test code

* tidy code
2025-04-11 06:55:51 -04:00
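The dependency-aware cache described above frees each node's output as soon as every node consuming it has run. A minimal sketch of that reference-counting idea (class and method names hypothetical, not ComfyUI's execution API):

```python
class DependencyAwareCache:
    """Sketch: evict a node's cached output once all of its
    descendants have executed, trading re-execution for lower RAM."""

    def __init__(self, graph):
        # graph: node -> list of dependency nodes whose outputs it consumes
        self.remaining_users = {}
        for node, deps in graph.items():
            for dep in deps:
                self.remaining_users[dep] = self.remaining_users.get(dep, 0) + 1
        self.outputs = {}

    def store(self, node, value):
        self.outputs[node] = value

    def consume(self, dep):
        """Called when one consumer of `dep` finishes executing."""
        self.remaining_users[dep] -= 1
        if self.remaining_users[dep] == 0:
            del self.outputs[dep]  # free RAM as early as possible

# "a" feeds both "b" and "c": its output survives until both have run.
cache = DependencyAwareCache({"a": [], "b": ["a"], "c": ["a"]})
cache.store("a", "decoded latent")
```

Because nothing stays cached between runs, every workflow re-executes fully each time; that is the trade-off the PR description calls out.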
patientx
163780343e
updated comfyui-frontend version 2025-04-11 00:32:22 +03:00
patientx
27f850ff8a
Merge branch 'comfyanonymous:master' into master 2025-04-11 00:23:38 +03:00
Chenlei Hu
98bdca4cb2
Deprecate InputTypeOptions.defaultInput (#7551)
* Deprecate InputTypeOptions.defaultInput

* nit

* nit
2025-04-10 06:57:06 -04:00
patientx
ae8488fdc7
Merge branch 'comfyanonymous:master' into master 2025-04-09 19:53:21 +03:00
Jedrzej Kosinski
e346d8584e
Add prepare_sampling wrapper allowing custom nodes to more accurately report noise_shape (#7500) 2025-04-09 09:43:35 -04:00
patientx
2bf016391a
Merge branch 'comfyanonymous:master' into master 2025-04-07 13:01:39 +03:00
comfyanonymous
70d7242e57 Support the wan fun reward loras. 2025-04-07 05:01:47 -04:00
patientx
137ab318e1
Merge branch 'comfyanonymous:master' into master 2025-04-05 16:30:54 +03:00
comfyanonymous
3bfe4e5276 Support 512 siglip model. 2025-04-05 07:01:01 -04:00
patientx
c90f1b7948
Merge branch 'comfyanonymous:master' into master 2025-04-05 13:39:32 +03:00
Raphael Walker
89e4ea0175
Add activations_shape info in UNet models (#7482)
* Add activations_shape info in UNet models

* activations_shape should be a list
2025-04-04 21:27:54 -04:00
comfyanonymous
3a100b9a55 Disable partial offloading of audio VAE. 2025-04-04 21:24:56 -04:00
patientx
4541842b9a
Merge branch 'comfyanonymous:master' into master 2025-04-03 03:15:32 +03:00
BiologicalExplosion
2222cf67fd
MLU memory optimization (#7470)
Co-authored-by: huzhan <huzhan@cambricon.com>
2025-04-02 19:24:04 -04:00
patientx
1040220970
Merge branch 'comfyanonymous:master' into master 2025-04-01 22:56:01 +03:00
BVH
301e26b131
Add option to store TE in bf16 (#7461) 2025-04-01 13:48:53 -04:00
patientx
b7d9be6864
Merge branch 'comfyanonymous:master' into master 2025-03-30 14:17:07 +03:00
comfyanonymous
a3100c8452 Remove useless code. 2025-03-29 20:12:56 -04:00
patientx
f02045a45d
Merge branch 'comfyanonymous:master' into master 2025-03-28 16:58:48 +03:00
comfyanonymous
2d17d8910c Don't error if wan concat image has extra channels. 2025-03-28 08:49:29 -04:00
patientx
01f2da55f9
Update zluda.py 2025-03-28 10:33:39 +03:00
patientx
9d401fe602
Merge branch 'comfyanonymous:master' into master 2025-03-27 22:52:53 +03:00
comfyanonymous
0a1f8869c9 Add WanFunInpaintToVideo node for the Wan fun inpaint models. 2025-03-27 11:13:27 -04:00
patientx
39cf3cdc32
Merge branch 'comfyanonymous:master' into master 2025-03-27 12:35:23 +03:00
comfyanonymous
3661c833bc Support the WAN 2.1 fun control models.
Use the new WanFunControlToVideo node.
2025-03-26 19:54:54 -04:00
patientx
8115bdf68a
Merge branch 'comfyanonymous:master' into master 2025-03-25 22:35:14 +03:00
comfyanonymous
8edc1f44c1 Support more float8 types. 2025-03-25 05:23:49 -04:00
patientx
87e937ecd6
Merge branch 'comfyanonymous:master' into master 2025-03-23 18:32:46 +03:00
comfyanonymous
e471c726e5 Fallback to pytorch attention if sage attention fails. 2025-03-22 15:45:56 -04:00
patientx
64008960e9
Update zluda.py 2025-03-22 13:53:47 +03:00
patientx
a6db9cc07a
Merge branch 'comfyanonymous:master' into master 2025-03-22 13:52:43 +03:00
comfyanonymous
d9fa9d307f Automatically set the right sampling type for lotus. 2025-03-21 14:19:37 -04:00
thot experiment
83e839a89b
Native LotusD Implementation (#7125)
* draft pass at a native comfy implementation of Lotus-D depth and normal est

* fix model_sampling kludges

* fix ruff

---------

Co-authored-by: comfyanonymous <121283862+comfyanonymous@users.noreply.github.com>
2025-03-21 14:04:15 -04:00
patientx
cf9ad7aae3
Update zluda.py 2025-03-21 12:21:23 +03:00
patientx
28a4de830c
Merge branch 'comfyanonymous:master' into master 2025-03-20 14:23:30 +03:00
comfyanonymous
3872b43d4b A few fixes for the hunyuan3d models. 2025-03-20 04:52:31 -04:00
comfyanonymous
32ca0805b7 Fix orientation of hunyuan 3d model. 2025-03-19 19:55:24 -04:00
comfyanonymous
11f1b41bab Initial Hunyuan3Dv2 implementation.
Supports the multiview, mini, turbo models and VAEs.
2025-03-19 16:52:58 -04:00
patientx
4115126a82
Update zluda.py 2025-03-19 12:15:38 +03:00
patientx
34b87d4542
Merge branch 'comfyanonymous:master' into master 2025-03-18 17:14:19 +03:00
comfyanonymous
3b19fc76e3 Allow disabling pe in flux code for some other models. 2025-03-18 05:09:25 -04:00
patientx
c853489349
Merge branch 'comfyanonymous:master' into master 2025-03-18 01:12:05 +03:00
comfyanonymous
50614f1b79 Fix regression with clip vision. 2025-03-17 13:56:11 -04:00
patientx
97dd7b8b52
Merge branch 'comfyanonymous:master' into master 2025-03-17 13:10:34 +03:00
comfyanonymous
6dc7b0bfe3 Add support for giant dinov2 image encoder. 2025-03-17 05:53:54 -04:00
patientx
30e3177e00
Merge branch 'comfyanonymous:master' into master 2025-03-16 21:26:31 +03:00
comfyanonymous
e8e990d6b8 Cleanup code. 2025-03-16 06:29:12 -04:00
patientx
dcc409faa4
Merge branch 'comfyanonymous:master' into master 2025-03-16 13:05:56 +03:00
Jedrzej Kosinski
2e24a15905
Call unpatch_hooks at the start of ModelPatcher.partially_unload (#7253)
* Call unpatch_hooks at the start of ModelPatcher.partially_unload

* Only call unpatch_hooks in partially_unload if lowvram is possible
2025-03-16 06:02:45 -04:00
chaObserv
fd5297131f
Guard the edge cases of noise term in er_sde (#7265) 2025-03-16 06:02:25 -04:00
patientx
bc664959a4
Merge branch 'comfyanonymous:master' into master 2025-03-15 17:11:51 +03:00
comfyanonymous
55a1b09ddc Allow loading diffusion model files with the "Load Checkpoint" node. 2025-03-15 08:27:49 -04:00
comfyanonymous
3c3988df45 Show a better error message if the VAE is invalid. 2025-03-15 08:26:36 -04:00