comfyanonymous
3041e5c354
Switch mochi and wan models to use pytorch RMSNorm. ( #7925 )
...
* Switch genmo model to native RMSNorm.
* Switch WAN to native RMSNorm.
2025-05-03 19:07:55 -04:00
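These two commits replace hand-rolled RMSNorm modules with the native PyTorch one. A minimal sketch of the equivalence, assuming PyTorch >= 2.4 (where torch.nn.RMSNorm landed); the manual function below is illustrative, not ComfyUI's exact code:

```python
import torch

def manual_rms_norm(x, weight, eps=1e-6):
    # the classic hand-rolled form these commits replace
    variance = x.pow(2).mean(-1, keepdim=True)
    return x * torch.rsqrt(variance + eps) * weight

dim = 64
norm = torch.nn.RMSNorm(dim, eps=1e-6)  # native module
x = torch.randn(2, 8, dim)
print(torch.allclose(norm(x), manual_rms_norm(x, norm.weight), atol=1e-5))  # True
```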
patientx
f98aad15d5
Merge branch 'comfyanonymous:master' into master
2025-05-02 23:58:33 +03:00
patientx
1068783ff8
Update zluda.py
2025-05-02 20:21:10 +03:00
patientx
3b827e0a59
Update zluda.py
2025-05-02 20:20:41 +03:00
Kohaku-Blueleaf
2ab9618732
Fix the bugs in OFT/BOFT module ( #7909 )
...
* Correct calculate_weight and load for OFT
* Correct calculate_weight and loading for BOFT
2025-05-02 13:12:37 -04:00
patientx
bc1fa6e013
Create zluda.py (custom zluda for miopen-triton)
2025-05-02 17:44:26 +03:00
patientx
073ff4a11d
Create placeholder
2025-05-01 23:05:14 +03:00
patientx
aad001bfbe
Add files via upload
2025-05-01 23:04:46 +03:00
patientx
caa3597bb7
Create placeholder
2025-05-01 23:03:51 +03:00
patientx
085925ddf7
Update zluda.py
2025-05-01 20:49:28 +03:00
patientx
da173f67b3
Merge branch 'comfyanonymous:master' into master
2025-05-01 17:23:12 +03:00
comfyanonymous
aa9d759df3
Switch ltxv to use the pytorch RMSNorm. ( #7897 )
2025-05-01 06:33:42 -04:00
comfyanonymous
08ff5fa08a
Cleanup chroma PR.
2025-04-30 20:57:30 -04:00
Silver
4ca3d84277
Support for Chroma - Flux1 Schnell distilled with CFG ( #7355 )
...
* Upload files for Chroma Implementation
* Remove trailing whitespace
* trim more trailing whitespace..oops
* remove unused imports
* Add supported_inference_dtypes
* Set min_length to 0 and remove attention_mask=True
* Set min_length to 1
* get_modulations added from blepping and minor changes
* Add lora conversion if statement in lora.py
* Update supported_models.py
* update model_base.py
* add upstream commits
* set modelType.FLOW, will cause beta scheduler to work properly
* Adjust memory usage factor and remove unnecessary code
* fix mistake
* reduce code duplication
* remove unused imports
* refactor for upstream sync
* sync chroma-support with upstream via syncbranch patch
* Update sd.py
* Add Chroma as option for the OptimalStepsScheduler node
2025-04-30 20:57:00 -04:00
patientx
f49d26848b
Merge branch 'comfyanonymous:master' into master
2025-04-30 13:46:06 +03:00
comfyanonymous
dbc726f80c
Better vace memory estimation. ( #7875 )
2025-04-29 20:42:00 -04:00
comfyanonymous
0a66d4b0af
Per device stream counters for async offload. ( #7873 )
2025-04-29 20:28:52 -04:00
patientx
64709ce55c
Merge branch 'comfyanonymous:master' into master
2025-04-29 14:18:45 +03:00
guill
68f0d35296
Add support for VIDEO as a built-in type ( #7844 )
...
* Add basic support for videos as types
This PR adds support for VIDEO as a first-class type. In order to avoid
unnecessary costs, VIDEO outputs must implement the `VideoInput` ABC,
but their implementation details can vary. Included are two
implementations of this type which can be returned by other nodes:
* `VideoFromFile` - Created with either a path on disk (as a string) or
a `io.BytesIO` containing the contents of a file in a supported format
(like .mp4). This implementation won't actually load the video unless
necessary. It will also avoid re-encoding when saving if possible.
* `VideoFromComponents` - Created from an image tensor and an optional
audio tensor.
Currently, only h264 encoded videos in .mp4 containers are supported for
saving, but the plan is to add additional encodings/containers in the
near future (particularly .webm).
* Add optimization to avoid parsing entire video
* Improve type declarations to reduce warnings
* Make sure bytesIO objects can be read many times
* Fix a potential issue when saving long videos
* Fix incorrect type annotation
* Add a `LoadVideo` node to make testing easier
* Refactor new types out of the base comfy folder
I've created a new `comfy_api` top-level module. The intention is that
anything within this folder would be covered by semver-style versioning
that would allow custom nodes to rely on them not introducing breaking
changes.
* Fix linting issue
2025-04-29 05:58:00 -04:00
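The PR body above describes an abstract VIDEO type with lazily-loading implementations. The sketch below mirrors that shape under assumed names; the real classes live in ComfyUI's comfy_api module and differ in detail:

```python
import io
from abc import ABC, abstractmethod

class VideoInput(ABC):
    # Abstract VIDEO type: nodes return some implementation of this ABC.
    @abstractmethod
    def save_to(self, path: str) -> None:
        ...

class VideoFromFile(VideoInput):
    def __init__(self, source):
        self.source = source              # str path or io.BytesIO; nothing decoded yet

    def save_to(self, path: str) -> None:
        if isinstance(self.source, str):
            with open(self.source, "rb") as f:
                data = f.read()
        else:
            self.source.seek(0)           # rewind so the buffer can be read many times
            data = self.source.read()
        with open(path, "wb") as f:       # pass-through copy: no re-encode
            f.write(data)
```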
patientx
0aeb958ea5
Merge branch 'comfyanonymous:master' into master
2025-04-29 01:49:37 +03:00
comfyanonymous
83d04717b6
Support HiDream E1 model. ( #7857 )
2025-04-28 15:01:15 -04:00
chaObserv
c15909bb62
CFG++ for gradient estimation sampler ( #7809 )
2025-04-28 13:51:35 -04:00
comfyanonymous
5a50c3c7e5
Fix stream priority to support older pytorch. ( #7856 )
2025-04-28 13:07:21 -04:00
Pam
30159a7fe6
Save v pred zsnr metadata ( #7840 )
2025-04-28 13:03:21 -04:00
patientx
6244dfa1e1
Merge branch 'comfyanonymous:master' into master
2025-04-28 01:13:53 +03:00
comfyanonymous
c8cd7ad795
Use stream for casting if enabled. ( #7833 )
2025-04-27 05:38:11 -04:00
patientx
ec34f4da57
Merge branch 'comfyanonymous:master' into master
2025-04-27 05:01:55 +03:00
comfyanonymous
ac10a0d69e
Make loras work with --async-offload ( #7824 )
2025-04-26 19:56:22 -04:00
patientx
9cc8e2e1d0
Merge branch 'comfyanonymous:master' into master
2025-04-26 23:32:14 +03:00
comfyanonymous
0dcc75ca54
Add experimental --async-offload lowvram weight offloading. ( #7820 )
...
This should speed up the lowvram mode a bit. It is currently only enabled when --async-offload is used, but it will be enabled by default in the future if there are no problems.
2025-04-26 16:11:21 -04:00
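A rough sketch of the async-offload idea: start the next weight's host-to-device copy on a side CUDA stream while the current layer computes, and make the compute stream wait only when the weight is actually needed. This is illustrative, not ComfyUI's offload machinery:

```python
import torch

copy_stream = torch.cuda.Stream()

def prefetch(weight_cpu: torch.Tensor):
    # begin an async H2D copy; returns the GPU tensor plus a completion event
    with torch.cuda.stream(copy_stream):
        w = weight_cpu.pin_memory().to("cuda", non_blocking=True)
        ev = torch.cuda.Event()
        ev.record(copy_stream)
    return w, ev

# just before the layer that consumes `w` runs on the default stream:
#   torch.cuda.current_stream().wait_event(ev)
```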
patientx
d6238ed3f0
Merge branch 'comfyanonymous:master' into master
2025-04-26 15:31:39 +03:00
comfyanonymous
23e39f2ba7
Add a T5TokenizerOptions node to set options for the T5 tokenizer. ( #7803 )
2025-04-25 19:36:00 -04:00
patientx
57a5e6e7ae
Merge branch 'comfyanonymous:master' into master
2025-04-26 01:23:01 +03:00
AustinMroz
78992c4b25
[NodeDef] Add documentation on widgetType ( #7768 )
...
* [NodeDef] Add documentation on widgetType
* Document required version for widgetType
2025-04-25 13:35:07 -04:00
patientx
224f72f90f
Merge branch 'comfyanonymous:master' into master
2025-04-25 14:11:50 +03:00
comfyanonymous
f935d42d8e
Support SimpleTuner lycoris lora format for HiDream.
2025-04-25 03:11:14 -04:00
patientx
1d9338b4b9
Merge branch 'comfyanonymous:master' into master
2025-04-24 14:50:59 +03:00
thot experiment
e2eed9eb9b
throw away alpha channel in clip vision preprocessor ( #7769 )
...
saves users having to explicitly discard the channel
2025-04-23 21:28:36 -04:00
patientx
d8a75b86e9
Update zluda.py
2025-04-24 03:11:46 +03:00
patientx
91d7a9d234
Merge branch 'comfyanonymous:master' into master
2025-04-23 12:41:27 +03:00
comfyanonymous
552615235d
Fix for dino lowvram. ( #7748 )
2025-04-23 04:12:52 -04:00
Robin Huang
0738e4ea5d
[API nodes] Add backbone for supporting api nodes in ComfyUI ( #7745 )
...
* Add Ideogram generate node.
* Add staging api.
* COMFY_API_NODE_NAME node property
* switch to boolean flag and use original node name for id
* add optional to type
* Add API_NODE and common error for missing auth token (#5 )
* Add Minimax Video Generation + Async Task queue polling example (#6 )
* [Minimax] Show video preview and embed workflow in output (#7 )
* [API Nodes] Send empty request body instead of empty dictionary. (#8 )
* Fixed: removed function from rebase.
* Add pydantic.
* Remove uv.lock
* Remove polling operations.
* Update stubs workflow.
* Remove polling comments.
* Update stubs.
* Use pydantic v2.
* Use pydantic v2.
* Add basic OpenAITextToImage node
* Add.
* convert image to tensor.
* Improve types.
* Ruff.
* Push tests.
* Handle multi-form data.
- Don't set content-type for multi-part/form
- Use data field instead of JSON
* Change to api.comfy.org
* Handle error code 409.
* Remove nodes.
---------
Co-authored-by: bymyself <cbyrne@comfy.org>
Co-authored-by: Yoland Y <4950057+yoland68@users.noreply.github.com>
2025-04-23 02:18:08 -04:00
patientx
a397c3aeb3
Merge branch 'comfyanonymous:master' into master
2025-04-22 13:28:29 +03:00
comfyanonymous
2d6805ce57
Add option for using fp8_e8m0fnu for model weights. ( #7733 )
...
Seems to break every model I have tried but worth testing?
2025-04-22 06:17:38 -04:00
patientx
c09fa908f5
Merge branch 'comfyanonymous:master' into master
2025-04-22 12:13:56 +03:00
Kohaku-Blueleaf
a8f63c0d5b
Support dora_scale on both axis ( #7727 )
2025-04-22 05:01:27 -04:00
Kohaku-Blueleaf
966c43ce26
Add OFT/BOFT algorithm in weight adapter ( #7725 )
2025-04-22 04:59:47 -04:00
comfyanonymous
3ab231f01f
Fix issue with WAN VACE implementation. ( #7724 )
2025-04-21 23:36:12 -04:00
Kohaku-Blueleaf
1f3fba2af5
Unified Weight Adapter system for better maintainability and future feature of Lora system ( #7540 )
2025-04-21 20:15:32 -04:00
comfyanonymous
5d0d4ee98a
Add strength control for vace. ( #7717 )
2025-04-21 19:36:20 -04:00
patientx
32ec658779
Merge branch 'comfyanonymous:master' into master
2025-04-21 23:16:54 +03:00
filtered
5d51794607
Add node type hint for socketless option ( #7714 )
...
* Add node type hint for socketless option
* nit - Doc
2025-04-21 16:13:00 -04:00
comfyanonymous
ce22f687cc
Support for WAN VACE preview model. ( #7711 )
...
* Support for WAN VACE preview model.
* Remove print.
2025-04-21 14:40:29 -04:00
patientx
d926896b55
Merge branch 'comfyanonymous:master' into master
2025-04-21 00:00:31 +03:00
comfyanonymous
2c735c13b4
Slightly better fix for #7687
2025-04-20 11:33:27 -04:00
patientx
0143b19681
Merge branch 'comfyanonymous:master' into master
2025-04-20 15:05:36 +03:00
comfyanonymous
fd27494441
Use empty t5 of size 128 for hidream, seems to give closer results.
2025-04-19 19:49:40 -04:00
power88
f43e1d7f41
Hidream: Allow loading hidream text encoders in CLIPLoader and DualCLIPLoader ( #7676 )
...
* Hidream: Allow partial loading text encoders
* reformat code for ruff check.
2025-04-19 19:47:30 -04:00
patientx
cd77ded0c7
Merge branch 'comfyanonymous:master' into master
2025-04-19 23:20:10 +03:00
comfyanonymous
636d4bfb89
Fix hard crash when the spiece tokenizer path is bad.
2025-04-19 15:55:43 -04:00
patientx
682a70bcf9
Update zluda.py
2025-04-19 00:53:59 +03:00
patientx
71ac9830ab
updated comfyui-frontend version
2025-04-17 22:15:11 +03:00
patientx
95773a0045
updated "comfyui-frontend version" - added "comfyui-workflow-templates"
2025-04-17 22:13:36 +03:00
patientx
d9211bbf1e
Merge branch 'comfyanonymous:master' into master
2025-04-17 20:51:12 +03:00
comfyanonymous
dbcfd092a2
Set default context_img_len to 257
2025-04-17 12:42:34 -04:00
comfyanonymous
c14429940f
Support loading WAN FLF model.
2025-04-17 12:04:48 -04:00
patientx
6bbe3b19d4
Merge branch 'comfyanonymous:master' into master
2025-04-17 14:51:08 +03:00
comfyanonymous
0d720e4367
Don't hardcode length of context_img in wan code.
2025-04-17 06:25:39 -04:00
patientx
7950c510e7
Merge branch 'comfyanonymous:master' into master
2025-04-17 10:10:03 +03:00
comfyanonymous
1fc00ba4b6
Make hidream work with any latent resolution.
2025-04-16 18:34:14 -04:00
patientx
d05e9b8298
Merge branch 'comfyanonymous:master' into master
2025-04-17 01:22:11 +03:00
comfyanonymous
9899d187b1
Limit T5 to 128 tokens for HiDream: #7620
2025-04-16 18:07:55 -04:00
comfyanonymous
f00f340a56
Reuse code from flux model.
2025-04-16 17:43:55 -04:00
patientx
503292b3c0
Merge branch 'comfyanonymous:master' into master
2025-04-16 23:26:10 +03:00
Chenlei Hu
cce1d9145e
[Type] Mark input options NotRequired ( #7614 )
2025-04-16 15:41:00 -04:00
patientx
765224ad78
Merge branch 'comfyanonymous:master' into master
2025-04-16 12:10:01 +03:00
comfyanonymous
b4dc03ad76
Fix issue on old torch.
2025-04-16 04:53:56 -04:00
patientx
eee802e685
Update zluda.py
2025-04-16 00:53:18 +03:00
patientx
ad2fa1a675
Merge branch 'comfyanonymous:master' into master
2025-04-16 00:50:26 +03:00
comfyanonymous
9ad792f927
Basic support for hidream i1 model.
2025-04-15 17:35:05 -04:00
patientx
ab0860b96a
Merge branch 'comfyanonymous:master' into master
2025-04-15 21:37:22 +03:00
comfyanonymous
6fc5dbd52a
Cleanup.
2025-04-15 12:13:28 -04:00
patientx
af58afcae8
Merge branch 'comfyanonymous:master' into master
2025-04-15 18:23:31 +03:00
comfyanonymous
3e8155f7a3
More flexible long clip support.
...
Add clip g long clip support.
Text encoder refactor.
Support llama models with different vocab sizes.
2025-04-15 10:32:21 -04:00
patientx
09491fecbd
Merge branch 'comfyanonymous:master' into master
2025-04-15 14:41:15 +03:00
comfyanonymous
8a438115fb
add RMSNorm to comfy.ops
2025-04-14 18:00:33 -04:00
patientx
205bdad97e
fix frontend package version
2025-04-13 17:12:14 +03:00
patientx
b6d5765f0f
Added onnxruntime patching
2025-04-13 17:11:13 +03:00
patientx
f1513262d3
Merge branch 'comfyanonymous:master' into master
2025-04-13 03:21:00 +03:00
chaObserv
e51d9ba5fc
Add SEEDS (stage 2 & 3 DP) sampler ( #7580 )
...
* Add seeds stage 2 & 3 (DP) sampler
* Change the name to SEEDS in comment
2025-04-12 18:36:08 -04:00
catboxanon
1714a4c158
Add CublasOps support ( #7574 )
...
* CublasOps support
* Guard CublasOps behind --fast arg
2025-04-12 18:29:15 -04:00
patientx
81b9fcfe4d
Merge branch 'comfyanonymous:master' into master
2025-04-11 19:55:56 +03:00
Chargeuk
ed945a1790
Dependency Aware Node Caching for low RAM/VRAM machines ( #7509 )
...
* add dependency-aware cache that removes a cached node as soon as all of its descendants have executed. This allows users with lower RAM to run workflows they would otherwise not be able to run. The downside is that every workflow will fully run each time even if no nodes have changed.
* remove test code
* tidy code
2025-04-11 06:55:51 -04:00
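The policy described above is simple to state: count each node's consumers up front and free its cached output once the last consumer has run. A minimal sketch, assuming hypothetical topo_order() and run_node() helpers:

```python
from collections import defaultdict

def execute(graph, run_node, topo_order):
    # graph maps node -> list of input nodes
    remaining = defaultdict(int)
    for inputs in graph.values():
        for dep in inputs:
            remaining[dep] += 1            # consumers still waiting on dep
    cache = {}
    for node in topo_order(graph):
        cache[node] = run_node(node, [cache[d] for d in graph[node]])
        for dep in graph[node]:
            remaining[dep] -= 1
            if remaining[dep] == 0:        # last consumer finished: free it
                del cache[dep]
    return cache                           # only terminal outputs remain
```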
patientx
163780343e
updated comfyui-frontend version
2025-04-11 00:32:22 +03:00
patientx
27f850ff8a
Merge branch 'comfyanonymous:master' into master
2025-04-11 00:23:38 +03:00
Chenlei Hu
98bdca4cb2
Deprecate InputTypeOptions.defaultInput ( #7551 )
...
* Deprecate InputTypeOptions.defaultInput
* nit
* nit
2025-04-10 06:57:06 -04:00
patientx
ae8488fdc7
Merge branch 'comfyanonymous:master' into master
2025-04-09 19:53:21 +03:00
Jedrzej Kosinski
e346d8584e
Add prepare_sampling wrapper allowing custom nodes to more accurately report noise_shape ( #7500 )
2025-04-09 09:43:35 -04:00
patientx
2bf016391a
Merge branch 'comfyanonymous:master' into master
2025-04-07 13:01:39 +03:00
comfyanonymous
70d7242e57
Support the wan fun reward loras.
2025-04-07 05:01:47 -04:00
patientx
137ab318e1
Merge branch 'comfyanonymous:master' into master
2025-04-05 16:30:54 +03:00
comfyanonymous
3bfe4e5276
Support 512 siglip model.
2025-04-05 07:01:01 -04:00
patientx
c90f1b7948
Merge branch 'comfyanonymous:master' into master
2025-04-05 13:39:32 +03:00
Raphael Walker
89e4ea0175
Add activations_shape info in UNet models ( #7482 )
...
* Add activations_shape info in UNet models
* activations_shape should be a list
2025-04-04 21:27:54 -04:00
comfyanonymous
3a100b9a55
Disable partial offloading of audio VAE.
2025-04-04 21:24:56 -04:00
patientx
4541842b9a
Merge branch 'comfyanonymous:master' into master
2025-04-03 03:15:32 +03:00
BiologicalExplosion
2222cf67fd
MLU memory optimization ( #7470 )
...
Co-authored-by: huzhan <huzhan@cambricon.com>
2025-04-02 19:24:04 -04:00
patientx
1040220970
Merge branch 'comfyanonymous:master' into master
2025-04-01 22:56:01 +03:00
BVH
301e26b131
Add option to store TE in bf16 ( #7461 )
2025-04-01 13:48:53 -04:00
patientx
b7d9be6864
Merge branch 'comfyanonymous:master' into master
2025-03-30 14:17:07 +03:00
comfyanonymous
a3100c8452
Remove useless code.
2025-03-29 20:12:56 -04:00
patientx
f02045a45d
Merge branch 'comfyanonymous:master' into master
2025-03-28 16:58:48 +03:00
comfyanonymous
2d17d8910c
Don't error if wan concat image has extra channels.
2025-03-28 08:49:29 -04:00
patientx
01f2da55f9
Update zluda.py
2025-03-28 10:33:39 +03:00
patientx
9d401fe602
Merge branch 'comfyanonymous:master' into master
2025-03-27 22:52:53 +03:00
comfyanonymous
0a1f8869c9
Add WanFunInpaintToVideo node for the Wan fun inpaint models.
2025-03-27 11:13:27 -04:00
patientx
39cf3cdc32
Merge branch 'comfyanonymous:master' into master
2025-03-27 12:35:23 +03:00
comfyanonymous
3661c833bc
Support the WAN 2.1 fun control models.
...
Use the new WanFunControlToVideo node.
2025-03-26 19:54:54 -04:00
patientx
8115bdf68a
Merge branch 'comfyanonymous:master' into master
2025-03-25 22:35:14 +03:00
comfyanonymous
8edc1f44c1
Support more float8 types.
2025-03-25 05:23:49 -04:00
patientx
87e937ecd6
Merge branch 'comfyanonymous:master' into master
2025-03-23 18:32:46 +03:00
comfyanonymous
e471c726e5
Fallback to pytorch attention if sage attention fails.
2025-03-22 15:45:56 -04:00
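The fallback pattern is the usual try/except around the optional kernel. A sketch, treating the sageattention import and call as an assumption about that package's public API:

```python
import torch

def attention(q, k, v):
    try:
        from sageattention import sageattn   # optional dependency
        return sageattn(q, k, v)
    except Exception:
        # any failure (missing package, unsupported shape/dtype) falls back
        return torch.nn.functional.scaled_dot_product_attention(q, k, v)
```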
patientx
64008960e9
Update zluda.py
2025-03-22 13:53:47 +03:00
patientx
a6db9cc07a
Merge branch 'comfyanonymous:master' into master
2025-03-22 13:52:43 +03:00
comfyanonymous
d9fa9d307f
Automatically set the right sampling type for lotus.
2025-03-21 14:19:37 -04:00
thot experiment
83e839a89b
Native LotusD Implementation ( #7125 )
...
* draft pass at a native comfy implementation of Lotus-D depth and normal est
* fix model_sampling kludges
* fix ruff
---------
Co-authored-by: comfyanonymous <121283862+comfyanonymous@users.noreply.github.com>
2025-03-21 14:04:15 -04:00
patientx
cf9ad7aae3
Update zluda.py
2025-03-21 12:21:23 +03:00
patientx
28a4de830c
Merge branch 'comfyanonymous:master' into master
2025-03-20 14:23:30 +03:00
comfyanonymous
3872b43d4b
A few fixes for the hunyuan3d models.
2025-03-20 04:52:31 -04:00
comfyanonymous
32ca0805b7
Fix orientation of hunyuan 3d model.
2025-03-19 19:55:24 -04:00
comfyanonymous
11f1b41bab
Initial Hunyuan3Dv2 implementation.
...
Supports the multiview, mini, turbo models and VAEs.
2025-03-19 16:52:58 -04:00
patientx
4115126a82
Update zluda.py
2025-03-19 12:15:38 +03:00
patientx
34b87d4542
Merge branch 'comfyanonymous:master' into master
2025-03-18 17:14:19 +03:00
comfyanonymous
3b19fc76e3
Allow disabling pe in flux code for some other models.
2025-03-18 05:09:25 -04:00
patientx
c853489349
Merge branch 'comfyanonymous:master' into master
2025-03-18 01:12:05 +03:00
comfyanonymous
50614f1b79
Fix regression with clip vision.
2025-03-17 13:56:11 -04:00
patientx
97dd7b8b52
Merge branch 'comfyanonymous:master' into master
2025-03-17 13:10:34 +03:00
comfyanonymous
6dc7b0bfe3
Add support for giant dinov2 image encoder.
2025-03-17 05:53:54 -04:00
patientx
30e3177e00
Merge branch 'comfyanonymous:master' into master
2025-03-16 21:26:31 +03:00
comfyanonymous
e8e990d6b8
Cleanup code.
2025-03-16 06:29:12 -04:00
patientx
dcc409faa4
Merge branch 'comfyanonymous:master' into master
2025-03-16 13:05:56 +03:00
Jedrzej Kosinski
2e24a15905
Call unpatch_hooks at the start of ModelPatcher.partially_unload ( #7253 )
...
* Call unpatch_hooks at the start of ModelPatcher.partially_unload
* Only call unpatch_hooks in partially_unload if lowvram is possible
2025-03-16 06:02:45 -04:00
chaObserv
fd5297131f
Guard the edge cases of noise term in er_sde ( #7265 )
2025-03-16 06:02:25 -04:00
patientx
bc664959a4
Merge branch 'comfyanonymous:master' into master
2025-03-15 17:11:51 +03:00
comfyanonymous
55a1b09ddc
Allow loading diffusion model files with the "Load Checkpoint" node.
2025-03-15 08:27:49 -04:00
comfyanonymous
3c3988df45
Show a better error message if the VAE is invalid.
2025-03-15 08:26:36 -04:00
comfyanonymous
a2448fc527
Remove useless code.
2025-03-14 18:10:37 -04:00
patientx
977dbafcf6
update the comfyui frontend package version
2025-03-15 00:07:04 +03:00
patientx
ccd9fe7f3c
Merge branch 'comfyanonymous:master' into master
2025-03-14 23:47:51 +03:00
comfyanonymous
6a0daa79b6
Make the SkipLayerGuidanceDIT node work on WAN.
2025-03-14 10:55:19 -04:00
FeepingCreature
9c98c6358b
Tolerate missing @torch.library.custom_op ( #7234 )
...
This can happen on Pytorch versions older than 2.4.
2025-03-14 09:51:26 -04:00
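torch.library.custom_op was added in PyTorch 2.4, so older builds need a stand-in. A sketch of the shape such a shim can take (not ComfyUI's exact code):

```python
import torch

try:
    custom_op = torch.library.custom_op        # PyTorch >= 2.4
except AttributeError:
    def custom_op(name, fn=None, /, *, mutates_args=(), **kwargs):
        # no-op decorator: skips registration but keeps eager execution working
        def decorator(f):
            return f
        return decorator if fn is None else fn
```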
patientx
bcea9b9a0c
Update attention.py to keep older torch versions running
2025-03-14 16:45:32 +03:00
patientx
eaf40b802d
Merge branch 'comfyanonymous:master' into master
2025-03-14 12:00:25 +03:00
FeepingCreature
7aceb9f91c
Add --use-flash-attention flag. ( #7223 )
...
* Add --use-flash-attention flag.
This is useful on AMD systems, as FA builds are still 10% faster than Pytorch cross-attention.
2025-03-14 03:22:41 -04:00
comfyanonymous
35504e2f93
Fix.
2025-03-13 15:03:18 -04:00
comfyanonymous
299436cfed
Print mac version.
2025-03-13 10:05:40 -04:00
patientx
b57a624e7c
Merge branch 'comfyanonymous:master' into master
2025-03-13 03:13:15 +03:00
Chenlei Hu
9b6cd9b874
[NodeDef] Add documentation on multi_select input option ( #7212 )
2025-03-12 17:29:39 -04:00
chaObserv
3fc688aebd
Ensure the extra_args in dpmpp sde series ( #7204 )
2025-03-12 17:28:59 -04:00
patientx
4a632a54a4
Merge branch 'comfyanonymous:master' into master
2025-03-12 11:28:56 +03:00
chaObserv
01015bff16
Add er_sde sampler ( #7187 )
2025-03-12 02:42:37 -04:00
patientx
cf490d92b3
Merge branch 'comfyanonymous:master' into master
2025-03-11 01:28:12 +03:00
comfyanonymous
ca8efab79f
Support control loras on Wan.
2025-03-10 17:23:13 -04:00
patientx
c469113159
Merge branch 'comfyanonymous:master' into master
2025-03-09 14:09:50 +03:00
comfyanonymous
9aac21f894
Fix issues with new hunyuan img2vid model and bump version to v0.3.26
2025-03-09 05:07:22 -04:00
Jedrzej Kosinski
528d1b3563
When cached_hook_patches contain weights for hooks, only use hook_backup for unused keys ( #7067 )
2025-03-09 04:26:31 -04:00
comfyanonymous
7395b0c0d1
Support new hunyuan video i2v model.
...
Use the new "v2 (replace)" guidance type in HunyuanImageToVideo and set
image_interleave to 4 on the "Text Encode Hunyuan Video" node.
2025-03-08 20:34:47 -05:00
comfyanonymous
0952569493
Fix stable cascade VAE on some lowvram machines.
2025-03-08 20:24:04 -05:00
patientx
b8ad97eca2
Update zluda.py
2025-03-08 14:57:09 +03:00
patientx
0bf4f88dea
Update zluda.py
2025-03-08 14:37:15 +03:00
patientx
2ce9177547
Update zluda.py
2025-03-08 14:33:15 +03:00
patientx
0f7de5c588
Update zluda.py
2025-03-08 14:23:08 +03:00
patientx
d385e286a1
Update zluda.py
2025-03-08 14:22:23 +03:00
patientx
09156b577c
Merge branch 'comfyanonymous:master' into master
2025-03-08 14:21:36 +03:00
comfyanonymous
be4e760648
Add an image_interleave option to the Hunyuan image to video encode node.
...
See the tooltip for what it does.
2025-03-07 19:56:26 -05:00
patientx
8847252eec
Merge branch 'comfyanonymous:master' into master
2025-03-07 16:14:21 +03:00
comfyanonymous
11b1f27cb1
Set WAN default compute dtype to fp16.
2025-03-07 04:52:36 -05:00
comfyanonymous
70e15fd743
No need for scale_input when fp8 matrix mult is disabled.
2025-03-07 04:49:20 -05:00
comfyanonymous
e1474150de
Support fp8_scaled diffusion models that don't use fp8 matrix mult.
2025-03-07 04:39:21 -05:00
patientx
7d16d06c3b
Merge branch 'comfyanonymous:master' into master
2025-03-06 23:42:01 +03:00
JettHu
e62d72e8ca
Typo in node_typing.py ( #7092 )
2025-03-06 15:24:04 -05:00
patientx
03825eaa28
Merge branch 'comfyanonymous:master' into master
2025-03-06 22:23:13 +03:00
comfyanonymous
dfa36e6855
Fix some things breaking when embeddings fail to apply.
2025-03-06 13:31:55 -05:00
patientx
44c060b3de
Merge branch 'comfyanonymous:master' into master
2025-03-06 12:16:31 +03:00
comfyanonymous
29a70ca101
Support HunyuanVideo image to video model.
2025-03-06 03:07:15 -05:00
comfyanonymous
0bef826a98
Support llava clip vision model.
2025-03-06 00:24:43 -05:00
comfyanonymous
85ef295069
Make applying embeddings more efficient.
...
Adding new tokens no longer makes a whole copy of the embeddings weight
which can be massive on certain models.
2025-03-05 17:34:38 -05:00
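One way to get that behavior, sketched with hypothetical names: keep the handful of newly added token vectors in a small side table and route lookups there, so the large base matrix is never copied:

```python
import torch

def embed(ids, base_weight, extra_rows):
    # ids below the base vocab size hit the big (shared) matrix; the rest
    # hit the small per-prompt table of newly added tokens
    n = base_weight.shape[0]
    out = torch.empty(*ids.shape, base_weight.shape[1], dtype=base_weight.dtype)
    mask = ids < n
    out[mask] = base_weight[ids[mask]]
    out[~mask] = extra_rows[ids[~mask] - n]
    return out
```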
patientx
199e91029d
Merge branch 'comfyanonymous:master' into master
2025-03-05 23:54:19 +03:00
Chenlei Hu
5d84607bf3
Add type hint for FileLocator ( #6968 )
...
* Add type hint for FileLocator
* nit
2025-03-05 15:35:26 -05:00
Silver
c1909f350f
Better argument handling of front-end-root ( #7043 )
...
* Better argument handling of front-end-root
Improves handling of the front-end-root launch argument. In several instances users set it, yet ComfyUI launched as normal and completely disregarded the argument, which doesn't make sense. It is better to indicate to the user that something is incorrect.
* Removed unused import
There was no real reason to use "Optional" typing in the front-end-root argument.
2025-03-05 15:34:22 -05:00
Chenlei Hu
52b3469606
[NodeDef] Explicitly add control_after_generate to seed/noise_seed ( #7059 )
...
* [NodeDef] Explicitly add control_after_generate to seed/noise_seed
* Update comfy/comfy_types/node_typing.py
Co-authored-by: filtered <176114999+webfiltered@users.noreply.github.com>
---------
Co-authored-by: filtered <176114999+webfiltered@users.noreply.github.com>
2025-03-05 15:33:23 -05:00
patientx
c36a942c12
Merge branch 'comfyanonymous:master' into master
2025-03-05 19:11:25 +03:00
comfyanonymous
369b079ff6
Fix lowvram issue with ltxv vae.
2025-03-05 05:26:08 -05:00
comfyanonymous
9c9a7f012a
Adjust ltxv memory factor.
2025-03-05 05:16:05 -05:00
patientx
93001919fa
Merge branch 'comfyanonymous:master' into master
2025-03-05 11:17:58 +03:00
comfyanonymous
93fedd92fe
Support LTXV 0.9.5.
...
Credits: Lightricks team.
2025-03-05 00:13:49 -05:00
patientx
586eabb95d
Merge branch 'comfyanonymous:master' into master
2025-03-04 19:10:54 +03:00
comfyanonymous
65042f7d39
Make it easier to set a custom template for hunyuan video.
2025-03-04 09:26:05 -05:00
patientx
cacb8da101
Merge branch 'comfyanonymous:master' into master
2025-03-04 13:17:37 +03:00
comfyanonymous
7c7c70c400
Refactor skyreels i2v code.
2025-03-04 00:15:45 -05:00
patientx
f12bcde392
Merge branch 'comfyanonymous:master' into master
2025-03-03 15:10:56 +03:00
comfyanonymous
f86c724ef2
Temporal area composition.
...
New ConditioningSetAreaPercentageVideo node.
2025-03-03 06:50:31 -05:00
patientx
6c34d0a58a
Update zluda.py
2025-03-02 22:43:11 +03:00
patientx
7763bf823e
Merge branch 'comfyanonymous:master' into master
2025-03-02 17:18:12 +03:00
comfyanonymous
9af6320ec9
Make 2d area composition nodes work on video models.
2025-03-02 08:19:16 -05:00
patientx
0c4bebf5fb
Merge branch 'comfyanonymous:master' into master
2025-03-01 14:59:20 +03:00
comfyanonymous
4dc6709307
Rename argument in last commit and document the options.
2025-03-01 02:43:49 -05:00
Chenlei Hu
4d55f16ae8
Use enum list for --fast options ( #7024 )
2025-03-01 02:37:35 -05:00
patientx
c235a51d82
Update zluda.py
2025-02-28 16:41:56 +03:00
patientx
af43425ab5
Update model_management.py
2025-02-28 16:37:55 +03:00
patientx
1871a594ba
Merge branch 'comfyanonymous:master' into master
2025-02-28 11:47:19 +03:00
comfyanonymous
cf0b549d48
--fast now takes a number as argument to indicate how fast you want it.
...
The idea is that you can indicate how much quality vs speed you want.
At the moment:
--fast 2 enables fp16 accumulation if your pytorch supports it.
--fast 5 enables fp8 matrix mult on fp8 models and the optimization above.
--fast without a number enables all optimizations.
2025-02-28 02:48:20 -05:00
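The flag-or-number behavior described above maps directly onto argparse's optional argument value. A minimal sketch with an illustrative sentinel (the real CLI may differ):

```python
import argparse

EVERYTHING = 10   # bare "--fast" enables all optimizations

parser = argparse.ArgumentParser()
parser.add_argument("--fast", nargs="?", type=int, const=EVERYTHING, default=0)
args = parser.parse_args(["--fast", "2"])

if args.fast >= 2:
    pass   # enable fp16 accumulation
if args.fast >= 5:
    pass   # also enable fp8 matrix mult on fp8 models
```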
comfyanonymous
eb4543474b
Use fp16 for intermediate for fp8 weights with --fast if supported.
2025-02-28 02:17:50 -05:00
comfyanonymous
1804397952
Use fp16 if checkpoint weights are fp16 and the model supports it.
2025-02-27 16:39:57 -05:00
patientx
cc65ca4c42
Merge branch 'comfyanonymous:master' into master
2025-02-28 00:39:40 +03:00
comfyanonymous
f4dac8ab6f
Wan code small cleanup.
2025-02-27 07:22:42 -05:00
patientx
c4fb9f2a63
Merge branch 'comfyanonymous:master' into master
2025-02-27 13:06:17 +03:00
BiologicalExplosion
89253e9fe5
Support Cambricon MLU ( #6964 )
...
Co-authored-by: huzhan <huzhan@cambricon.com>
2025-02-26 20:45:13 -05:00
comfyanonymous
3ea3bc8546
Fix wan issues when prompt length is long.
2025-02-26 20:34:02 -05:00
patientx
b04a1f4127
Merge branch 'comfyanonymous:master' into master
2025-02-27 01:24:29 +03:00
comfyanonymous
0270a0b41c
Reduce artifacts on Wan by doing the patch embedding in fp32.
2025-02-26 16:59:26 -05:00
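A sketch of the fp32 patch-embedding trick under assumed module names: keep this one layer's weights in float32 and cast its output back, while the rest of the model runs in fp16/bf16:

```python
import torch

class PatchEmbedFP32(torch.nn.Module):
    def __init__(self, in_ch, dim, patch):
        super().__init__()
        # weights intentionally left in fp32 even when the model is half precision
        self.proj = torch.nn.Conv3d(in_ch, dim, kernel_size=patch, stride=patch)

    def forward(self, x):
        return self.proj(x.float()).to(x.dtype)   # fp32 math, original dtype out
```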
patientx
4f968c3c56
Merge branch 'comfyanonymous:master' into master
2025-02-26 17:11:50 +03:00
comfyanonymous
c37f15f98e
Add fast preview support for Wan models.
2025-02-26 08:56:23 -05:00
patientx
1193f3fbb1
Merge branch 'comfyanonymous:master' into master
2025-02-26 16:47:39 +03:00
comfyanonymous
4bca7367f3
Don't try to use clip_fea on t2v model.
2025-02-26 08:38:09 -05:00
patientx
debf69185c
Merge branch 'comfyanonymous:master' into master
2025-02-26 16:00:33 +03:00
comfyanonymous
b6fefe686b
Better wan memory estimation.
2025-02-26 07:51:22 -05:00
patientx
583f140eda
Merge branch 'comfyanonymous:master' into master
2025-02-26 13:26:25 +03:00
comfyanonymous
fa62287f1f
More code reuse in wan.
...
Fix bug when changing the compute dtype on wan.
2025-02-26 05:22:29 -05:00
patientx
743996a1f7
Merge branch 'comfyanonymous:master' into master
2025-02-26 12:56:06 +03:00
comfyanonymous
0844998db3
Slightly better wan i2v mask implementation.
2025-02-26 03:49:50 -05:00
comfyanonymous
4ced06b879
WIP support for Wan I2V model.
2025-02-26 01:49:43 -05:00
patientx
1e91ff59a1
Merge branch 'comfyanonymous:master' into master
2025-02-26 09:24:15 +03:00
comfyanonymous
cb06e9669b
Wan seems to work with fp16.
2025-02-25 21:37:12 -05:00
patientx
6e894524e2
Merge branch 'comfyanonymous:master' into master
2025-02-26 04:14:10 +03:00
comfyanonymous
9a66bb972d
Make wan work with all latent resolutions.
...
Cleanup some code.
2025-02-25 19:56:04 -05:00
patientx
4269943ac3
Merge branch 'comfyanonymous:master' into master
2025-02-26 03:13:47 +03:00
comfyanonymous
ea0f939df3
Fix issue with wan and other attention implementations.
2025-02-25 19:13:39 -05:00
comfyanonymous
f37551c1d2
Change wan rope implementation to the flux one.
...
Should be more compatible.
2025-02-25 19:11:14 -05:00
patientx
879db7bdfc
Merge branch 'comfyanonymous:master' into master
2025-02-26 02:07:25 +03:00
comfyanonymous
63023011b9
WIP support for Wan t2v model.
2025-02-25 17:20:35 -05:00
patientx
6cf0fdcc3c
Merge branch 'comfyanonymous:master' into master
2025-02-25 17:12:14 +03:00
comfyanonymous
f40076096e
Cleanup some lumina te code.
2025-02-25 04:10:26 -05:00
patientx
d705fe2e0b
Merge branch 'comfyanonymous:master' into master
2025-02-24 13:42:27 +03:00
comfyanonymous
96d891cb94
Speedup on some models by not upcasting bfloat16 to float32 on mac.
2025-02-24 05:41:32 -05:00
patientx
8142770e5f
Merge branch 'comfyanonymous:master' into master
2025-02-23 14:51:43 +03:00
comfyanonymous
ace899e71a
Prioritize fp16 compute when using allow_fp16_accumulation
2025-02-23 04:45:54 -05:00
patientx
c15fe75f7b
Fix "CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasLtMatmulAlgoGetHeuristic`" error
2025-02-22 15:44:20 +03:00
patientx
26eb98b96f
Merge branch 'comfyanonymous:master' into master
2025-02-22 14:42:22 +03:00
comfyanonymous
aff16532d4
Remove some useless code.
2025-02-22 04:45:14 -05:00
comfyanonymous
072db3bea6
Assume the mac black image bug won't be fixed before v16.
2025-02-21 20:24:07 -05:00
comfyanonymous
a6deca6d9a
Latest mac still has the black image bug.
2025-02-21 20:14:30 -05:00
patientx
059397437b
Merge branch 'comfyanonymous:master' into master
2025-02-21 23:25:43 +03:00
comfyanonymous
41c30e92e7
Let all model memory be offloaded on nvidia.
2025-02-21 06:32:21 -05:00
patientx
603cacb14a
Merge branch 'comfyanonymous:master' into master
2025-02-20 23:06:56 +03:00
comfyanonymous
12da6ef581
Apparently directml supports fp16.
2025-02-20 09:30:24 -05:00
Silver
c5be423d6b
Fix link pointing to non-existing docs ( #6891 )
...
* Fix link pointing to non-exisiting docs
The current link is pointing to a path that does not exist any longer.
I changed it to point to the correct path for custom node datatypes.
* Update node_typing.py
2025-02-20 07:07:07 -05:00
patientx
a813e39d54
Merge branch 'comfyanonymous:master' into master
2025-02-19 16:19:26 +03:00
maedtb
5715be2ca9
Fix Hunyuan unet config detection for some models. ( #6877 )
...
The change to support 32 channel hunyuan models is missing the `key_prefix` on the key.
This addresses a complaint in the comments of acc152b674 .
2025-02-19 07:14:45 -05:00
patientx
4e6a5fc548
Merge branch 'comfyanonymous:master' into master
2025-02-19 13:52:34 +03:00
bymyself
afc85cdeb6
Add Load Image Output node ( #6790 )
...
* add LoadImageOutput node
* add route for input/output/temp files
* update node_typing.py
* use literal type for image_folder field
* mark node as beta
2025-02-18 17:53:01 -05:00
Jukka Seppänen
acc152b674
Support loading and using SkyReels-V1-Hunyuan-I2V ( #6862 )
...
* Support SkyReels-V1-Hunyuan-I2V
* VAE scaling
* Fix T2V (oops)
* Proper latent scaling
2025-02-18 17:06:54 -05:00
patientx
3bde94efbb
Merge branch 'comfyanonymous:master' into master
2025-02-18 16:27:10 +03:00
comfyanonymous
b07258cef2
Fix typo.
...
Let me know if this slows things down on 2000 series and below.
2025-02-18 07:28:33 -05:00
patientx
ef2e97356e
Merge branch 'comfyanonymous:master' into master
2025-02-17 14:04:15 +03:00
comfyanonymous
31e54b7052
Improve AMD arch detection.
2025-02-17 04:53:40 -05:00
comfyanonymous
8c0bae50c3
bf16 manual cast works on old AMD.
2025-02-17 04:42:40 -05:00
patientx
e8bf0ca27f
Merge branch 'comfyanonymous:master' into master
2025-02-17 12:42:02 +03:00
comfyanonymous
530412cb9d
Refactor torch version checks to be more future proof.
2025-02-17 04:36:45 -05:00
patientx
45c55f6cd0
Merge branch 'comfyanonymous:master' into master
2025-02-16 14:37:41 +03:00
comfyanonymous
e2919d38b4
Disable bf16 on AMD GPUs that don't support it.
2025-02-16 05:46:10 -05:00
patientx
6f05ace055
Merge branch 'comfyanonymous:master' into master
2025-02-14 13:50:23 +03:00
comfyanonymous
1cd6cd6080
Disable pytorch attention in VAE for AMD.
2025-02-14 05:42:14 -05:00
patientx
f125a37bdf
Update model_management.py
2025-02-14 12:33:27 +03:00
patientx
99d2824d5a
Update model_management.py
2025-02-14 12:30:19 +03:00
comfyanonymous
d7b4bf21a2
Auto enable mem efficient attention on gfx1100 on pytorch nightly 2.7
...
I'm not sure which arches are supported yet. If you see improvements in
memory usage while using --use-pytorch-cross-attention on your AMD GPU let
me know and I will add it to the list.
2025-02-14 04:18:14 -05:00
patientx
4d66aa9709
Merge branch 'comfyanonymous:master' into master
2025-02-14 11:00:12 +03:00
comfyanonymous
019c7029ea
Add a way to set a different compute dtype for the model at runtime.
...
Currently only works for diffusion models.
2025-02-13 20:34:03 -05:00
patientx
bce4176d3d
fixes to use pytorch-attention
2025-02-13 19:17:35 +03:00
patientx
f9ee02080f
Merge branch 'comfyanonymous:master' into master
2025-02-13 16:37:24 +03:00
comfyanonymous
8773ccf74d
Better memory estimation for ROCm that support mem efficient attention.
...
There is no way to check whether the card actually supports it, so it is assumed to if you use --use-pytorch-cross-attention with your card.
2025-02-13 08:32:36 -05:00
patientx
e4448cf48e
Merge branch 'comfyanonymous:master' into master
2025-02-12 15:50:23 +03:00
comfyanonymous
1d5d6586f3
Fix ruff.
2025-02-12 06:49:16 -05:00
zhoufan2956
35740259de
mix_ascend_bf16_infer_err ( #6794 )
2025-02-12 06:48:11 -05:00
comfyanonymous
ab888e1e0b
Add add_weight_wrapper function to model patcher.
...
Functions can now easily be added to wrap/modify model weights.
2025-02-12 05:55:35 -05:00
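A toy sketch of the wrapper mechanism the message describes, with hypothetical class and method names: the patcher holds per-key functions that are applied to a weight right before it is used:

```python
import torch

class Patcher:
    def __init__(self):
        self.weight_wrappers = {}

    def add_weight_wrapper(self, key, fn):
        self.weight_wrappers.setdefault(key, []).append(fn)

    def get_weight(self, key, weight):
        for fn in self.weight_wrappers.get(key, []):
            weight = fn(weight)   # e.g. cast, rescale, add a LoRA delta
        return weight

p = Patcher()
p.add_weight_wrapper("w1", lambda w: w * 0.5)
print(p.get_weight("w1", torch.ones(2)))   # tensor([0.5000, 0.5000])
```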
comfyanonymous
d9f0fcdb0c
Cleanup.
2025-02-11 17:17:03 -05:00
HishamC
b124256817
Fix for running via DirectML ( #6542 )
...
* Fix for running via DirectML
Fix the DirectML empty image generation issue with Flux1. Add a CPU fallback for unsupported paths. Verified the model works on AMD GPUs.
* fix formatting
* update causal mask calculation
2025-02-11 17:11:32 -05:00
patientx
196fc385e1
Merge branch 'comfyanonymous:master' into master
2025-02-11 16:38:17 +03:00
comfyanonymous
af4b7c91be
Make --force-fp16 actually force the diffusion model to be fp16.
2025-02-11 08:33:09 -05:00
patientx
2a0bc66fed
Merge branch 'comfyanonymous:master' into master
2025-02-10 15:41:15 +03:00
comfyanonymous
4027466c80
Make lumina model work with any latent resolution.
2025-02-10 00:24:20 -05:00
patientx
9561b03cc7
Merge branch 'comfyanonymous:master' into master
2025-02-09 15:33:03 +03:00
comfyanonymous
095d867147
Remove useless function.
2025-02-09 07:02:57 -05:00
Pam
caeb27c3a5
res_multistep: Fix cfgpp and add ancestral samplers ( #6731 )
2025-02-08 19:39:58 -05:00
comfyanonymous
3d06e1c555
Make error more clear to user.
2025-02-08 18:57:24 -05:00
catboxanon
43a74c0de1
Allow FP16 accumulation with --fast ( #6453 )
...
Currently only applies to PyTorch nightly releases. (>=20250208)
2025-02-08 17:00:56 -05:00
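The toggle presumably amounts to flipping a matmul backend flag; the attribute below exists only on new enough PyTorch builds, hence the guard (the flag name is my assumption, not confirmed by the commit):

```python
import torch

# guard: the attribute is absent on older PyTorch releases
if hasattr(torch.backends.cuda.matmul, "allow_fp16_accumulation"):
    torch.backends.cuda.matmul.allow_fp16_accumulation = True
```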
patientx
9a9da027b2
Merge branch 'comfyanonymous:master' into master
2025-02-07 12:02:36 +03:00
comfyanonymous
079eccc92a
Don't compress http response by default.
...
Remove argument to disable it.
Add new --enable-compress-response-body argument to enable it.
2025-02-07 03:29:21 -05:00
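ComfyUI's server is aiohttp-based, so opt-in compression can be expressed as a middleware. A hedged sketch, with the module-level flag standing in for the real --enable-compress-response-body argument:

```python
from aiohttp import web

ENABLE_COMPRESSION = False   # stand-in for --enable-compress-response-body

@web.middleware
async def maybe_compress(request, handler):
    response = await handler(request)
    if ENABLE_COMPRESSION:
        response.enable_compression()   # gzip/deflate per Accept-Encoding
    return response
```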
patientx
4a1e3ee925
Merge branch 'comfyanonymous:master' into master
2025-02-06 14:33:29 +03:00
comfyanonymous
14880e6dba
Remove some useless code.
2025-02-06 05:00:37 -05:00