Commit Graph

2179 Commits

Author SHA1 Message Date
comfyanonymous
ef5266b1c1
Support Flux Kontext Dev model. (#8679) 2025-06-26 11:28:41 -04:00
patientx
c1ce148f41
Merge branch 'comfyanonymous:master' into master 2025-06-26 11:34:14 +03:00
comfyanonymous
a96e65df18
Disable omnigen2 fp16 on older pytorch versions. (#8672) 2025-06-26 03:39:09 -04:00
patientx
7db0737e86
Merge branch 'comfyanonymous:master' into master 2025-06-26 02:41:07 +03:00
comfyanonymous
ec70ed6aea
Omnigen2 model implementation. (#8669) 2025-06-25 19:35:57 -04:00
patientx
8ad7604b12
Add files via upload 2025-06-26 01:45:32 +03:00
patientx
65ff46f01e
Merge branch 'comfyanonymous:master' into master 2025-06-25 12:14:05 +03:00
comfyanonymous
7a13f74220
unet -> diffusion model (#8659) 2025-06-25 04:52:34 -04:00
patientx
71a47f608a
Merge branch 'comfyanonymous:master' into master 2025-06-24 23:49:15 +03:00
chaObserv
8042eb20c6
Singlestep DPM++ SDE for RF (#8627)
Refactor the algorithm, and apply alpha scaling.
2025-06-24 14:59:09 -04:00
patientx
2bf05370ad
Merge branch 'comfyanonymous:master' into master 2025-06-21 14:54:22 +03:00
comfyanonymous
1883e70b43
Fix exception when using a noise mask with cosmos predict2. (#8621)
* Fix exception when using a noise mask with cosmos predict2.

* Fix ruff.
2025-06-21 03:30:39 -04:00
patientx
0e967d11b1
Merge branch 'comfyanonymous:master' into master 2025-06-20 16:18:48 +03:00
comfyanonymous
f7fb193712
Small flux optimization. (#8611) 2025-06-20 05:37:32 -04:00
patientx
73dfb38f5b
Merge branch 'comfyanonymous:master' into master 2025-06-20 09:44:52 +03:00
comfyanonymous
7e9267fa77
Make flux controlnet work with sd3 text enc. (#8599) 2025-06-19 18:50:05 -04:00
patientx
c46cc59288
Merge branch 'comfyanonymous:master' into master 2025-06-19 19:02:07 +03:00
comfyanonymous
91d40086db
Fix pytorch warning. (#8593) 2025-06-19 11:04:52 -04:00
patientx
e34ff57933
Add files via upload 2025-06-17 00:09:40 +03:00
patientx
4339cbea46
Merge branch 'comfyanonymous:master' into master 2025-06-16 22:21:35 +03:00
chaObserv
8e81c507d2
Multistep DPM++ SDE samplers for RF (#8541)
Include alpha in sampling and minor refactoring
2025-06-16 14:47:10 -04:00
comfyanonymous
e1c6dc720e
Allow setting min_length with tokenizer_data. (#8547) 2025-06-16 13:43:52 -04:00
patientx
e8327ffc3d
Merge branch 'comfyanonymous:master' into master 2025-06-15 21:42:40 +03:00
comfyanonymous
7ea79ebb9d
Add correct eps to ltxv rmsnorm. (#8542) 2025-06-15 12:21:25 -04:00
patientx
031f2f5120
Merge branch 'comfyanonymous:master' into master 2025-06-15 16:23:38 +03:00
comfyanonymous
d6a2137fc3
Support Cosmos predict2 image to video models. (#8535)
Use the CosmosPredict2ImageToVideoLatent node.
2025-06-14 21:37:07 -04:00
patientx
132d593223
Merge branch 'comfyanonymous:master' into master 2025-06-15 03:01:50 +03:00
chaObserv
53e8d8193c
Generalize SEEDS samplers (#8529)
Restore VP algorithm for RF and refactor noise_coeffs and half-logSNR calculations
2025-06-14 16:58:16 -04:00
patientx
8909c12ea9
Update model_management.py 2025-06-14 13:27:49 +03:00
patientx
8efd441de9
Merge branch 'comfyanonymous:master' into master 2025-06-14 12:43:19 +03:00
comfyanonymous
29596bd53f
Small cosmos attention code refactor. (#8530) 2025-06-14 05:02:05 -04:00
Kohaku-Blueleaf
520eb77b72
LoRA Trainer: LoRA training node in weight adapter scheme (#8446) 2025-06-13 19:25:59 -04:00
patientx
e3720cd495
Merge branch 'comfyanonymous:master' into master 2025-06-13 15:04:05 +03:00
comfyanonymous
c69af655aa
Uncap cosmos predict2 res and fix mem estimation. (#8518) 2025-06-13 07:30:18 -04:00
comfyanonymous
251f54a2ad
Basic initial support for cosmos predict2 text to image 2B and 14B models. (#8517) 2025-06-13 07:05:23 -04:00
patientx
a5e9b6729c
Update zluda.py 2025-06-13 01:40:34 +03:00
patientx
7bd5bcd135
Update zluda-default.py 2025-06-13 01:40:11 +03:00
patientx
c06a15d8a5
Update zluda.py 2025-06-13 01:03:45 +03:00
patientx
896bda9003
Update zluda-default.py 2025-06-13 01:03:14 +03:00
patientx
7d5f4074b6
Add files via upload 2025-06-12 16:10:32 +03:00
patientx
79dda39260
Update zluda.py 2025-06-12 13:20:24 +03:00
patientx
08784dc90d
Update zluda.py 2025-06-12 13:19:59 +03:00
patientx
11af025690
Update zluda.py 2025-06-12 13:11:03 +03:00
patientx
828b7636d0
Update zluda.py 2025-06-12 13:10:40 +03:00
patientx
f53791d5d2
Merge branch 'comfyanonymous:master' into master 2025-06-12 00:32:55 +03:00
pythongosssss
50c605e957
Add support for sqlite database (#8444)
* Add support for sqlite database

* fix
2025-06-11 16:43:39 -04:00
comfyanonymous
8a4ff747bd
Fix mistake in last commit. (#8496)
* Move to right place.
2025-06-11 15:13:29 -04:00
comfyanonymous
af1eb58be8
Fix black images on some flux models in fp16. (#8495) 2025-06-11 15:09:11 -04:00
patientx
06ac233007
Merge branch 'comfyanonymous:master' into master 2025-06-10 20:34:42 +03:00
comfyanonymous
6e28a46454
Apple most likely is never fixing the fp16 attention bug. (#8485) 2025-06-10 13:06:24 -04:00
patientx
4bc3866c67
Merge branch 'comfyanonymous:master' into master 2025-06-09 21:10:00 +03:00
comfyanonymous
7f800d04fa
Enable AMD fp8 and pytorch attention on some GPUs. (#8474)
Information is from the pytorch source code.
2025-06-09 12:50:39 -04:00
patientx
b4d015f5f3
Merge branch 'comfyanonymous:master' into master 2025-06-08 21:21:41 +03:00
comfyanonymous
97755eed46
Enable fp8 ops by default on gfx1201 (#8464) 2025-06-08 14:15:34 -04:00
patientx
156aedd995
Merge branch 'comfyanonymous:master' into master 2025-06-07 19:30:45 +03:00
comfyanonymous
daf9d25ee2
Cleaner torch version comparisons. (#8453) 2025-06-07 10:01:15 -04:00
patientx
d28b4525b3
Merge branch 'comfyanonymous:master' into master 2025-06-06 17:10:34 +03:00
comfyanonymous
3b4b171e18
Alternate fix for #8435 (#8442) 2025-06-06 09:43:27 -04:00
patientx
67fc8e3325
Merge branch 'comfyanonymous:master' into master 2025-06-06 01:42:57 +03:00
comfyanonymous
4248b1618f
Let chroma TE work on regular flux. (#8429) 2025-06-05 10:07:17 -04:00
patientx
9aeff135b2
Update zluda.py 2025-06-02 02:55:19 +03:00
patientx
803f82189a
Merge branch 'comfyanonymous:master' into master 2025-06-01 17:44:48 +03:00
comfyanonymous
fb4754624d
Make the casting in lists the same as regular inputs. (#8373) 2025-06-01 05:39:54 -04:00
comfyanonymous
19e45e9b0e
Make it easier to pass lists of tensors to models. (#8358) 2025-05-31 20:00:20 -04:00
patientx
d74ffb792a
Merge branch 'comfyanonymous:master' into master 2025-05-31 01:55:42 +03:00
drhead
08b7cc7506
use fused multiply-add pointwise ops in chroma (#8279) 2025-05-30 18:09:54 -04:00
patientx
07b8d211e6
Merge branch 'comfyanonymous:master' into master 2025-05-30 23:48:15 +03:00
comfyanonymous
704fc78854
Put ROCm version in tuple to make it easier to enable stuff based on it. (#8348) 2025-05-30 15:41:02 -04:00
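The tuple approach named in the commit above is a common version-gating pattern; a minimal sketch of the idea (hypothetical names and thresholds, not the actual `model_management.py` code):

```python
# Hypothetical sketch: parsing a ROCm version string into a tuple so that
# ordinary tuple comparison replaces fragile string/float comparisons.
def parse_rocm_version(version_string):
    parts = version_string.split(".")
    # keep only major.minor as integers, e.g. "6.1.2" -> (6, 1)
    return tuple(int(p) for p in parts[:2])

rocm_version = parse_rocm_version("6.1.2")

# Feature gates then become simple, readable comparisons:
enable_fp8_ops = rocm_version >= (6, 0)
enable_pytorch_attention = rocm_version >= (6, 1)
```

Tuples compare element-by-element, so `(6, 1) >= (6, 0)` behaves the way a human reads a version number, which is exactly what makes "enable stuff based on it" easy.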
patientx
c74742444d
Merge branch 'comfyanonymous:master' into master 2025-05-29 18:29:06 +03:00
comfyanonymous
f2289a1f59
Delete useless file. (#8327) 2025-05-29 08:29:37 -04:00
patientx
46a997fb23
Merge branch 'comfyanonymous:master' into master 2025-05-29 10:56:01 +03:00
comfyanonymous
5e5e46d40c
Not really tested WAN Phantom Support. (#8321) 2025-05-28 23:46:15 -04:00
comfyanonymous
1c1687ab1c
Support HiDream SimpleTuner loras. (#8318) 2025-05-28 18:47:15 -04:00
patientx
5b5165371e
Merge branch 'comfyanonymous:master' into master 2025-05-28 01:06:44 +03:00
comfyanonymous
06c661004e
Memory estimation code can now take into account conds. (#8307) 2025-05-27 15:09:05 -04:00
patientx
8609a6dced
Merge branch 'comfyanonymous:master' into master 2025-05-27 01:03:35 +03:00
comfyanonymous
89a84e32d2
Disable initial GPU load when novram is used. (#8294) 2025-05-26 16:39:27 -04:00
patientx
bbcb33ea72
Merge branch 'comfyanonymous:master' into master 2025-05-26 16:26:39 +03:00
comfyanonymous
e5799c4899
Enable pytorch attention by default on AMD gfx1151 (#8282) 2025-05-26 04:29:25 -04:00
patientx
48bbdd0842
Merge branch 'comfyanonymous:master' into master 2025-05-25 15:38:51 +03:00
comfyanonymous
a0651359d7
Return proper error if diffusion model not detected properly. (#8272) 2025-05-25 05:28:11 -04:00
patientx
9790aaac7b
Merge branch 'comfyanonymous:master' into master 2025-05-24 14:00:54 +03:00
comfyanonymous
5a87757ef9
Better error if sageattention is installed but a dependency is missing. (#8264) 2025-05-24 06:43:12 -04:00
patientx
3b69a08c08
Merge branch 'comfyanonymous:master' into master 2025-05-24 04:06:28 +03:00
comfyanonymous
0b50d4c0db
Add argument to explicitly enable fp8 compute support. (#8257)
This can be used to test if your current GPU/pytorch version supports fp8 matrix mult in combination with --fast or the fp8_e4m3fn_fast dtype.
2025-05-23 17:43:50 -04:00
patientx
3e49c3e2ff
Merge branch 'comfyanonymous:master' into master 2025-05-24 00:01:56 +03:00
drhead
30b2eb8a93
create arange on-device (#8255) 2025-05-23 16:15:06 -04:00
patientx
c653935b37
Merge branch 'comfyanonymous:master' into master 2025-05-23 13:57:11 +03:00
LuXuxue
dc4958db54
add some architectures to utils.py 2025-05-23 13:54:03 +08:00
comfyanonymous
f85c08df06
Make VACE conditionings stackable. (#8240) 2025-05-22 19:22:26 -04:00
comfyanonymous
87f9130778
Revert "This doesn't seem to be needed on chroma. (#8209)" (#8210)
This reverts commit 7e84bf5373.
2025-05-20 05:39:55 -04:00
comfyanonymous
7e84bf5373
This doesn't seem to be needed on chroma. (#8209) 2025-05-20 05:29:23 -04:00
patientx
acf83b60f4
Merge branch 'comfyanonymous:master' into master 2025-05-18 11:37:17 +03:00
comfyanonymous
aee2908d03
Remove useless log. (#8166) 2025-05-17 06:27:34 -04:00
patientx
de4e3dd19a
Merge branch 'comfyanonymous:master' into master 2025-05-16 02:43:05 +03:00
comfyanonymous
1c2d45d2b5
Fix typo in last PR. (#8144)
More robust model detection for future proofing.
2025-05-15 19:02:19 -04:00
George0726
c820ef950d
Add Wan-FUN Camera Control models and Add WanCameraImageToVideo node (#8013)
* support wan camera models

* fix by ruff check

* change camera_condition type; make camera_condition optional

* support camera trajectory nodes

* fix camera direction

---------

Co-authored-by: Qirui Sun <sunqr0667@126.com>
2025-05-15 19:00:43 -04:00
patientx
1cf25c6980
Add files via upload 2025-05-15 13:55:15 +03:00
patientx
44cac886c4
Create quant_per_block.py 2025-05-15 13:54:47 +03:00
patientx
01aae8eddc
Create fwd_prefill.py 2025-05-15 13:52:35 +03:00
patientx
39279bda97
Merge branch 'comfyanonymous:master' into master 2025-05-14 14:15:49 +03:00
Christian Byrne
98ff01e148
Display progress and result URL directly on API nodes (#8102)
* [Luma] Print download URL of successful task result directly on nodes (#177)

[Veo] Print download URL of successful task result directly on nodes (#184)

[Recraft] Print download URL of successful task result directly on nodes (#183)

[Pixverse] Print download URL of successful task result directly on nodes (#182)

[Kling] Print download URL of successful task result directly on nodes (#181)

[MiniMax] Print progress text and download URL of successful task result directly on nodes (#179)

[Docs] Link to docs in `API_NODE` class property type annotation comment (#178)

[Ideogram] Print download URL of successful task result directly on nodes (#176)

Show output URL and progress text on Pika nodes (#168)

[BFL] Print download URL of successful task result directly on nodes (#175)

[OpenAI ] Print download URL of successful task result directly on nodes (#174)

* fix ruff errors

* fix 3.10 syntax error
2025-05-14 00:33:18 -04:00
patientx
c5ecbe5b30
Merge branch 'comfyanonymous:master' into master 2025-05-14 00:08:44 +03:00
comfyanonymous
4a9014e201
Hunyuan Custom initial untested implementation. (#8101) 2025-05-13 15:53:47 -04:00
patientx
3609a0cf35
Update zluda.py 2025-05-13 18:57:09 +03:00
patientx
8ced886a3d
Update zluda.py 2025-05-13 16:23:24 +03:00
patientx
6fc773788f
updated how to handle comfyui package updates 2025-05-13 16:22:44 +03:00
patientx
f0127d6326
Merge branch 'comfyanonymous:master' into master 2025-05-13 16:00:32 +03:00
patientx
d934498f60
Update rmsnorm.py 2025-05-13 16:00:26 +03:00
comfyanonymous
a814f2e8cc
Fix issue with old pytorch RMSNorm. (#8095) 2025-05-13 07:54:28 -04:00
comfyanonymous
481732a0ed
Support official ACE Step loras. (#8094) 2025-05-13 07:32:16 -04:00
patientx
d15ce39530
Merge branch 'comfyanonymous:master' into master 2025-05-13 01:06:19 +03:00
patientx
05d6c876ad
Update zluda.py 2025-05-13 01:06:08 +03:00
patientx
6aa9077f3c
Update zluda.py 2025-05-13 01:05:49 +03:00
comfyanonymous
640c47e7de
Fix torch warning about deprecated function. (#8075)
Drop support for torch versions below 2.2 on the audio VAEs.
2025-05-12 14:32:01 -04:00
patientx
cd7eb9bd36
"Boolean value of Tensor with more than one value is ambiguous" fix 2025-05-11 20:39:42 +03:00
patientx
8abcc4ec4f
Update zluda.py 2025-05-11 16:51:43 +03:00
patientx
3f58f50df3
Update zluda.py 2025-05-11 16:51:06 +03:00
patientx
6098615cda
Merge branch 'comfyanonymous:master' into master 2025-05-11 15:10:21 +03:00
comfyanonymous
577de83ca9
ACE VAE works in fp16. (#8055) 2025-05-11 04:58:00 -04:00
patientx
515bf966b2
Merge branch 'comfyanonymous:master' into master 2025-05-10 14:57:16 +03:00
comfyanonymous
d42613686f
Fix issue with fp8 ops on some models. (#8045)
_scaled_mm errors when an input is non contiguous.
2025-05-10 07:52:56 -04:00
patientx
2862921cca
Merge branch 'comfyanonymous:master' into master 2025-05-10 14:17:24 +03:00
Pam
1b3bf0a5da
Fix res_multistep_ancestral sampler (#8030) 2025-05-09 20:14:13 -04:00
patientx
56461e2e90
Merge branch 'comfyanonymous:master' into master 2025-05-09 22:40:55 +03:00
blepping
42da274717
Use normal ComfyUI attention in ACE-Steps model (#8023)
* Use normal ComfyUI attention in ACE-Steps model

* Let optimized_attention handle output reshape for ACE
2025-05-09 13:51:02 -04:00
patientx
4672537a99
Merge branch 'comfyanonymous:master' into master 2025-05-09 12:39:13 +03:00
comfyanonymous
8ab15c863c
Add --mmap-torch-files to enable use of mmap when loading ckpt/pt (#8021) 2025-05-09 04:52:47 -04:00
patientx
8a3424f354
Update zluda.py 2025-05-09 00:17:15 +03:00
patientx
a59deda664
Update zluda.py 2025-05-09 00:16:37 +03:00
patientx
9444d18408
Update zluda.py 2025-05-08 23:44:22 +03:00
patientx
184c8521cb
Update zluda.py 2025-05-08 23:44:05 +03:00
patientx
6c2370a577
Update zluda.py 2025-05-08 20:18:26 +03:00
patientx
0436293d99
Update zluda.py 2025-05-08 20:17:58 +03:00
patientx
81a16eefbc
Update zluda.py 2025-05-08 19:57:59 +03:00
patientx
f9671afff0
Update zluda.py 2025-05-08 19:57:28 +03:00
patientx
a963fba415
Update utils.py 2025-05-08 15:33:45 +03:00
patientx
e973632f11
Merge branch 'comfyanonymous:master' into master 2025-05-08 14:52:54 +03:00
comfyanonymous
a692c3cca4
Make ACE VAE tiling work. (#8004) 2025-05-08 07:25:45 -04:00
comfyanonymous
5d3cc85e13
Make japanese hiragana and katakana characters work with ACE. (#7997) 2025-05-08 03:32:36 -04:00
comfyanonymous
c7c025b8d1
Adjust memory estimation code for ACE VAE. (#7990) 2025-05-08 01:22:23 -04:00
comfyanonymous
fd08e39588
Make torchaudio not a hard requirement. (#7987)
Some platforms can't install it apparently so if it's not there it should
only break models that actually use it.
2025-05-07 21:37:12 -04:00
comfyanonymous
56b6ee6754
Detection code to make ltxv models without config work. (#7986) 2025-05-07 21:28:24 -04:00
patientx
a0656dad3a
Merge branch 'comfyanonymous:master' into master 2025-05-08 02:56:22 +03:00
comfyanonymous
cc33cd3422
Experimental lyrics strength for ACE. (#7984) 2025-05-07 19:22:07 -04:00
patientx
326e041c11
Merge branch 'comfyanonymous:master' into master 2025-05-07 17:44:36 +03:00
comfyanonymous
16417b40d9
Initial ACE-Step model implementation. (#7972) 2025-05-07 08:33:34 -04:00
patientx
5edeb23260
Merge branch 'comfyanonymous:master' into master 2025-05-06 19:08:07 +03:00
comfyanonymous
271c9c5b9e
Better mem estimation for the LTXV 13B model. (#7963) 2025-05-06 09:52:37 -04:00
comfyanonymous
a4e679765e
Change chroma to use Flux shift. (#7961) 2025-05-06 09:00:01 -04:00
patientx
bfc4e29f20
Merge branch 'comfyanonymous:master' into master 2025-05-06 14:02:47 +03:00
comfyanonymous
094e9ef126
Add a way to disable api nodes: --disable-api-nodes (#7960) 2025-05-06 04:53:53 -04:00
Jedrzej Kosinski
1271c4ef9d
More API Nodes (#7956)
* Add Ideogram generate node.

* Add staging api.

* Add API_NODE and common error for missing auth token (#5)

* Add Minimax Video Generation + Async Task queue polling example (#6)

* [Minimax] Show video preview and embed workflow in output (#7)

* Remove uv.lock

* Remove polling operations.

* Revert "Remove polling operations."

* Update stubs.

* Added Ideogram and Minimax back in.

* Added initial BFL Flux 1.1 [pro] Ultra node (#11)

* Add --comfy-api-base launch arg (#13)

* Add instructions for staging development. (#14)

* remove validation to make it easier to run against LAN copies of the API

* Manually add BFL polling status response schema (#15)

* Add function for uploading files. (#18)

* Add Luma nodes (#16)

* Refactor util functions (#20)

* Add VIDEO type (#21)

* Add rest of Luma node functionality (#19)

* Fix image_luma_ref not working (#28)

* [Bug] Remove duplicated option T2V-01 in MinimaxTextToVideoNode (#31)

* Add utils to map from pydantic model fields to comfy node inputs (#30)

* add veo2, bump av req (#32)

* Add Recraft nodes (#29)

* Add Kling Nodes (#12)

* Add Camera Concepts (luma_concepts) to Luma Video nodes (#33)

* Add Runway nodes (#17)

* Convert Minimax node to use VIDEO output type (#34)

* Standard `CATEGORY` system for api nodes (#35)

* Set `Content-Type` header when uploading files (#36)

* add better error propagation to veo2 (#37)

* Add Realistic Image and Logo Raster styles for Recraft v3 (#38)

* Fix runway image upload and progress polling (#39)

* Fix image upload for Luma: only include `Content-Type` header field if it's set explicitly (#40)

* Moved Luma nodes to nodes_luma.py (#47)

* Moved Recraft nodes to nodes_recraft.py (#48)

* Add Pixverse nodes (#46)

* Move and fix BFL nodes to node_bfl.py (#49)

* Move and edit Minimax node to nodes_minimax.py (#50)

* Add Minimax Image to Video node + Cleanup (#51)

* Add Recraft Text to Vector node, add Save SVG node to handle its output (#53)

* Added pixverse_template support to Pixverse Text to Video node (#54)

* Added Recraft Controls + Recraft Color RGB nodes (#57)

* split remaining nodes out of nodes_api, make utility lib, refactor ideogram (#61)

* Add types and docstrings to utils file (#64)

* Fix: `PollingOperation` progress bar update progress by absolute value (#65)

* Use common download function in kling nodes module (#67)

* Fix: Luma video nodes in `api nodes/image` category (#68)

* Set request type explicitly (#66)

* Add `control_after_generate` to all seed inputs (#69)

* Fix bug: deleting `Content-Type` when property does not exist (#73)

* Add preview to Save SVG node (#74)

* change default poll interval (#76), rework veo2

* Add Pixverse and updated Kling types (#75)

* Added Pixverse Image to Video node (#77)

* Add Pixverse Transition Video node (#79)

* Proper ray-1-6 support as fix has been applied in backend (#80)

* Added Recraft Style - Infinite Style Library node (#82)

* add ideogram v3 (#83)

* [Kling] Split Camera Control config to its own node (#81)

* Add Pika i2v and t2v nodes (#52)

* Temporary Fix for Runway (#87)

* Added Stability Stable Image Ultra node (#86)

* Remove Runway nodes (#88)

* Fix: Prompt text can't be validated in Kling nodes when using primitive nodes (#90)

* Fix: typo in node name "Stabiliy" => "Stability" (#91)

* Add String (Multiline) node (#93)

* Update Pika Duration and Resolution options (#94)

* Change base branch to master. Not main. (#95)

* Fix UploadRequest file_name param (#98)

* Removed Infinite Style Library until later (#99)

* fix ideogram style types (#100)

* fix multi image return (#101)

* add metadata saving to SVG (#102)

* Bump templates version to include API node template workflows (#104)

* Fix: `download_url_to_video_output` return type (#103)

* fix 4o generation bug (#106)

* Serve SVG files directly (#107)

* Add a bunch of nodes, 3 ready to use, the rest waiting for endpoint support (#108)

* Revert "Serve SVG files directly" (#111)

* Expose 4 remaining Recraft nodes (#112)

* [Kling] Add `Duration` and `Video ID` outputs (#105)

* Fix: datamodel-codegen sets string#binary type to non-existent `bytes_aliased` variable  (#114)

* Fix: Dall-e 2 not setting request content-type dynamically (#113)

* Default request timeout: one hour. (#116)

* Add Kling nodes: camera control, start-end frame, lip-sync, video extend (#115)

* Add 8 nodes - 4 BFL, 4 Stability (#117)

* Fix error for Recraft ImageToImage error for nonexistent random_seed param (#118)

* Add remaining Pika nodes (#119)

* Make controls input work for Recraft Image to Image node (#120)

* Use upstream PR: Support saving Comfy VIDEO type to buffer (#123)

* Use Upstream PR: "Fix: Error creating video when sliced audio tensor chunks are non-c-contiguous" (#127)

* Improve audio upload utils (#128)

* Fix: Nested `AnyUrl` in request model cannot be serialized (Kling, Runway) (#129)

* Show errors and API output URLs to the user (change log levels) (#131)

* Fix: Luma I2I fails when weight is <=0.01 (#132)

* Change category of `LumaConcepts` node from image to video (#133)

* Fix: `image.shape` accessed before `image` is null-checked (#134)

* Apply small fixes and most prompt validation (if needed to avoid API error) (#135)

* Node name/category modifications (#140)

* Add back Recraft Style - Infinite Style Library node (#141)

* Fixed Kling: Check attributes of pydantic types. (#144)

* Bump `comfyui-workflow-templates` version (#142)

* [Kling] Print response data when error validating response (#146)

* Fix: error validating Kling image response, trying to use `"key" in` on Pydantic class instance (#147)

* [Kling] Fix: Correct/verify supported subset of input combos in Kling nodes (#149)

* [Kling] Fix typo in node description (#150)

* [Kling] Fix: CFG min/max not being enforced (#151)

* Rebase launch-rebase (private) on prep-branch (public copy of master) (#153)

* Bump templates version (#154)

* Fix: Kling image gen nodes don't return entire batch when `n` > 1 (#152)

* Remove pixverse_template from PixVerse Transition Video node (#155)

* Invert image_weight value on Luma Image to Image node (#156)

* Invert and resize mask for Ideogram V3 node to match masking conventions (#158)

* [Kling] Fix: image generation nodes not returning Tuple (#159)

* [Bug] [Kling] Fix Kling camera control (#161)

* Kling Image Gen v2 + improve node descriptions for Flux/OpenAI (#160)

* [Kling] Don't return video_id from dual effect video (#162)

* Bump frontend to 1.18.8 (#163)

* Use 3.9 compat syntax (#164)

* Use Python 3.10

* add example env var

* Update templates to 0.1.11

* Bump frontend to 1.18.9

---------

Co-authored-by: Robin Huang <robin.j.huang@gmail.com>
Co-authored-by: Christian Byrne <cbyrne@comfy.org>
Co-authored-by: thot experiment <94414189+thot-experiment@users.noreply.github.com>
2025-05-06 04:23:00 -04:00
patientx
080b5d0df4
Update rmsnorm.py (Ensuring eps is not None) 2025-05-05 01:46:04 +03:00
patientx
634c398fd5
Merge branch 'comfyanonymous:master' into master 2025-05-04 16:14:44 +03:00
comfyanonymous
80a44b97f5
Change lumina to native RMSNorm. (#7935) 2025-05-04 06:39:23 -04:00
comfyanonymous
9187a09483
Change cosmos and hydit models to use the native RMSNorm. (#7934) 2025-05-04 06:26:20 -04:00
patientx
be4b829886
Merge branch 'comfyanonymous:master' into master 2025-05-04 03:46:17 +03:00
comfyanonymous
3041e5c354
Switch mochi and wan models to use pytorch RMSNorm. (#7925)
* Switch genmo model to native RMSNorm.

* Switch WAN to native RMSNorm.
2025-05-03 19:07:55 -04:00
patientx
f98aad15d5
Merge branch 'comfyanonymous:master' into master 2025-05-02 23:58:33 +03:00
patientx
1068783ff8
Update zluda.py 2025-05-02 20:21:10 +03:00
patientx
3b827e0a59
Update zluda.py 2025-05-02 20:20:41 +03:00
Kohaku-Blueleaf
2ab9618732
Fix the bugs in OFT/BOFT module (#7909)
* Correct calculate_weight and load for OFT

* Correct calculate_weight and loading for BOFT
2025-05-02 13:12:37 -04:00
patientx
bc1fa6e013
Create zluda.py (custom zluda for miopen-triton) 2025-05-02 17:44:26 +03:00
patientx
073ff4a11d
Create placeholder 2025-05-01 23:05:14 +03:00
patientx
aad001bfbe
Add files via upload 2025-05-01 23:04:46 +03:00
patientx
caa3597bb7
Create placeholder 2025-05-01 23:03:51 +03:00
patientx
085925ddf7
Update zluda.py 2025-05-01 20:49:28 +03:00
patientx
da173f67b3
Merge branch 'comfyanonymous:master' into master 2025-05-01 17:23:12 +03:00
comfyanonymous
aa9d759df3
Switch ltxv to use the pytorch RMSNorm. (#7897) 2025-05-01 06:33:42 -04:00
comfyanonymous
08ff5fa08a
Cleanup chroma PR. 2025-04-30 20:57:30 -04:00
Silver
4ca3d84277
Support for Chroma - Flux1 Schnell distilled with CFG (#7355)
* Upload files for Chroma Implementation

* Remove trailing whitespace

* trim more trailing whitespace..oops

* remove unused imports

* Add supported_inference_dtypes

* Set min_length to 0 and remove attention_mask=True

* Set min_length to 1

* get_mdulations added from blepping and minor changes

* Add lora conversion if statement in lora.py

* Update supported_models.py

* update model_base.py

* add upstream commits

* set modelType.FLOW, will cause beta scheduler to work properly

* Adjust memory usage factor and remove unnecessary code

* fix mistake

* reduce code duplication

* remove unused imports

* refactor for upstream sync

* sync chroma-support with upstream via syncbranch patch

* Update sd.py

* Add Chroma as option for the OptimalStepsScheduler node
2025-04-30 20:57:00 -04:00
patientx
f49d26848b
Merge branch 'comfyanonymous:master' into master 2025-04-30 13:46:06 +03:00
comfyanonymous
dbc726f80c
Better vace memory estimation. (#7875) 2025-04-29 20:42:00 -04:00
comfyanonymous
0a66d4b0af
Per device stream counters for async offload. (#7873) 2025-04-29 20:28:52 -04:00
patientx
64709ce55c
Merge branch 'comfyanonymous:master' into master 2025-04-29 14:18:45 +03:00
guill
68f0d35296
Add support for VIDEO as a built-in type (#7844)
* Add basic support for videos as types

This PR adds support for VIDEO as first-class types. In order to avoid
unnecessary costs, VIDEO outputs must implement the `VideoInput` ABC,
but their implementation details can vary. Included are two
implementations of this type which can be returned by other nodes:

* `VideoFromFile` - Created with either a path on disk (as a string) or
  a `io.BytesIO` containing the contents of a file in a supported format
  (like .mp4). This implementation won't actually load the video unless
  necessary. It will also avoid re-encoding when saving if possible.
* `VideoFromComponents` - Created from an image tensor and an optional
  audio tensor.

Currently, only h264 encoded videos in .mp4 containers are supported for
saving, but the plan is to add additional encodings/containers in the
near future (particularly .webm).

* Add optimization to avoid parsing entire video

* Improve type declarations to reduce warnings

* Make sure bytesIO objects can be read many times

* Fix a potential issue when saving long videos

* Fix incorrect type annotation

* Add a `LoadVideo` node to make testing easier

* Refactor new types out of the base comfy folder

I've created a new `comfy_api` top-level module. The intention is that
anything within this folder would be covered by semver-style versioning
that would allow custom nodes to rely on them not introducing breaking
changes.

* Fix linting issue
2025-04-29 05:58:00 -04:00
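The `VideoInput` ABC described in the PR message above can be sketched roughly as follows. This is a simplified illustration only: the class names `VideoInput`, `VideoFromFile`, and `VideoFromComponents` come from the commit, but the method names and bodies are assumptions, not the actual `comfy_api` interface.

```python
import io
from abc import ABC, abstractmethod

class VideoInput(ABC):
    """Abstract base: VIDEO outputs agree on an interface, not on a layout."""

    @abstractmethod
    def save_to(self, path: str) -> None:
        """Write the video to disk, re-encoding only if necessary."""

class VideoFromFile(VideoInput):
    """Wraps a path or io.BytesIO; defers decoding until actually needed."""

    def __init__(self, source):
        self.source = source  # str path or io.BytesIO

    def save_to(self, path: str) -> None:
        # No re-encode needed: copy the container bytes straight through.
        if isinstance(self.source, io.BytesIO):
            data = self.source.getvalue()
        else:
            with open(self.source, "rb") as f:
                data = f.read()
        with open(path, "wb") as f:
            f.write(data)

class VideoFromComponents(VideoInput):
    """Built from an image tensor and an optional audio tensor."""

    def __init__(self, images, audio=None):
        self.images = images
        self.audio = audio

    def save_to(self, path: str) -> None:
        raise NotImplementedError("would encode frames, e.g. h264 in .mp4")
```

The design point the PR makes is that consumers only depend on the ABC, so a file-backed video can avoid decoding (and re-encoding on save) entirely while a tensor-backed one pays the encode cost only when actually saved.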
patientx
0aeb958ea5
Merge branch 'comfyanonymous:master' into master 2025-04-29 01:49:37 +03:00
comfyanonymous
83d04717b6
Support HiDream E1 model. (#7857) 2025-04-28 15:01:15 -04:00
chaObserv
c15909bb62
CFG++ for gradient estimation sampler (#7809) 2025-04-28 13:51:35 -04:00
comfyanonymous
5a50c3c7e5
Fix stream priority to support older pytorch. (#7856) 2025-04-28 13:07:21 -04:00
Pam
30159a7fe6
Save v pred zsnr metadata (#7840) 2025-04-28 13:03:21 -04:00
patientx
6244dfa1e1
Merge branch 'comfyanonymous:master' into master 2025-04-28 01:13:53 +03:00
comfyanonymous
c8cd7ad795
Use stream for casting if enabled. (#7833) 2025-04-27 05:38:11 -04:00
patientx
ec34f4da57
Merge branch 'comfyanonymous:master' into master 2025-04-27 05:01:55 +03:00
comfyanonymous
ac10a0d69e
Make loras work with --async-offload (#7824) 2025-04-26 19:56:22 -04:00
patientx
9cc8e2e1d0
Merge branch 'comfyanonymous:master' into master 2025-04-26 23:32:14 +03:00
comfyanonymous
0dcc75ca54
Add experimental --async-offload lowvram weight offloading. (#7820)
This should speed up the lowvram mode a bit. It currently is only enabled when --async-offload is used but it will be enabled by default in the future if there are no problems.
2025-04-26 16:11:21 -04:00
patientx
d6238ed3f0
Merge branch 'comfyanonymous:master' into master 2025-04-26 15:31:39 +03:00
comfyanonymous
23e39f2ba7
Add a T5TokenizerOptions node to set options for the T5 tokenizer. (#7803) 2025-04-25 19:36:00 -04:00
patientx
57a5e6e7ae
Merge branch 'comfyanonymous:master' into master 2025-04-26 01:23:01 +03:00
AustinMroz
78992c4b25
[NodeDef] Add documentation on widgetType (#7768)
* [NodeDef] Add documentation on widgetType

* Document required version for widgetType
2025-04-25 13:35:07 -04:00
patientx
224f72f90f
Merge branch 'comfyanonymous:master' into master 2025-04-25 14:11:50 +03:00
comfyanonymous
f935d42d8e
Support SimpleTuner lycoris lora format for HiDream. 2025-04-25 03:11:14 -04:00
patientx
1d9338b4b9
Merge branch 'comfyanonymous:master' into master 2025-04-24 14:50:59 +03:00
thot experiment
e2eed9eb9b
throw away alpha channel in clip vision preprocessor (#7769)
saves users having to explicitly discard the channel
2025-04-23 21:28:36 -04:00
patientx
d8a75b86e9
Update zluda.py 2025-04-24 03:11:46 +03:00
patientx
91d7a9d234
Merge branch 'comfyanonymous:master' into master 2025-04-23 12:41:27 +03:00
comfyanonymous
552615235d
Fix for dino lowvram. (#7748) 2025-04-23 04:12:52 -04:00
Robin Huang
0738e4ea5d
[API nodes] Add backbone for supporting api nodes in ComfyUI (#7745)
* Add Ideogram generate node.

* Add staging api.

* COMFY_API_NODE_NAME node property

* switch to boolean flag and use original node name for id

* add optional to type

* Add API_NODE and common error for missing auth token (#5)

* Add Minimax Video Generation + Async Task queue polling example (#6)

* [Minimax] Show video preview and embed workflow in output (#7)

* [API Nodes] Send empty request body instead of empty dictionary. (#8)

* Fixed: removed function from rebase.

* Add pydantic.

* Remove uv.lock

* Remove polling operations.

* Update stubs workflow.

* Remove polling comments.

* Update stubs.

* Use pydantic v2.

* Use pydantic v2.

* Add basic OpenAITextToImage node

* Add.

* convert image to tensor.

* Improve types.

* Ruff.

* Push tests.

* Handle multi-form data.

- Don't set content-type for multi-part/form
- Use data field instead of JSON

* Change to api.comfy.org

* Handle error code 409.

* Remove nodes.

---------

Co-authored-by: bymyself <cbyrne@comfy.org>
Co-authored-by: Yoland Y <4950057+yoland68@users.noreply.github.com>
2025-04-23 02:18:08 -04:00