comfyanonymous
a96e65df18
Disable omnigen2 fp16 on older pytorch versions. (#8672)
2025-06-26 03:39:09 -04:00
patientx
7db0737e86
Merge branch 'comfyanonymous:master' into master
2025-06-26 02:41:07 +03:00
comfyanonymous
ec70ed6aea
Omnigen2 model implementation. (#8669)
2025-06-25 19:35:57 -04:00
patientx
e3720cd495
Merge branch 'comfyanonymous:master' into master
2025-06-13 15:04:05 +03:00
comfyanonymous
c69af655aa
Uncap cosmos predict2 res and fix mem estimation. (#8518)
2025-06-13 07:30:18 -04:00
comfyanonymous
251f54a2ad
Basic initial support for cosmos predict2 text to image 2B and 14B models. (#8517)
2025-06-13 07:05:23 -04:00
patientx
de4e3dd19a
Merge branch 'comfyanonymous:master' into master
2025-05-16 02:43:05 +03:00
comfyanonymous
1c2d45d2b5
Fix typo in last PR. (#8144)
More robust model detection for future proofing.
2025-05-15 19:02:19 -04:00
George0726
c820ef950d
Add Wan-FUN Camera Control models and Add WanCameraImageToVideo node (#8013)
* support wan camera models
* fix by ruff check
* change camera_condition type; make camera_condition optional
* support camera trajectory nodes
* fix camera direction
---------
Co-authored-by: Qirui Sun <sunqr0667@126.com>
2025-05-15 19:00:43 -04:00
patientx
326e041c11
Merge branch 'comfyanonymous:master' into master
2025-05-07 17:44:36 +03:00
comfyanonymous
16417b40d9
Initial ACE-Step model implementation. (#7972)
2025-05-07 08:33:34 -04:00
patientx
5edeb23260
Merge branch 'comfyanonymous:master' into master
2025-05-06 19:08:07 +03:00
comfyanonymous
271c9c5b9e
Better mem estimation for the LTXV 13B model. (#7963)
2025-05-06 09:52:37 -04:00
patientx
da173f67b3
Merge branch 'comfyanonymous:master' into master
2025-05-01 17:23:12 +03:00
comfyanonymous
08ff5fa08a
Cleanup chroma PR.
2025-04-30 20:57:30 -04:00
Silver
4ca3d84277
Support for Chroma - Flux1 Schnell distilled with CFG (#7355)
* Upload files for Chroma Implementation
* Remove trailing whitespace
* trim more trailing whitespace..oops
* remove unused imports
* Add supported_inference_dtypes
* Set min_length to 0 and remove attention_mask=True
* Set min_length to 1
* get_modulations added from blepping and minor changes
* Add lora conversion if statement in lora.py
* Update supported_models.py
* update model_base.py
* add upstream commits
* set modelType.FLOW, will cause beta scheduler to work properly
* Adjust memory usage factor and remove unnecessary code
* fix mistake
* reduce code duplication
* remove unused imports
* refactor for upstream sync
* sync chroma-support with upstream via syncbranch patch
* Update sd.py
* Add Chroma as option for the OptimalStepsScheduler node
2025-04-30 20:57:00 -04:00
patientx
f49d26848b
Merge branch 'comfyanonymous:master' into master
2025-04-30 13:46:06 +03:00
comfyanonymous
dbc726f80c
Better vace memory estimation. (#7875)
2025-04-29 20:42:00 -04:00
patientx
32ec658779
Merge branch 'comfyanonymous:master' into master
2025-04-21 23:16:54 +03:00
comfyanonymous
ce22f687cc
Support for WAN VACE preview model. (#7711)
* Support for WAN VACE preview model.
* Remove print.
2025-04-21 14:40:29 -04:00
patientx
ad2fa1a675
Merge branch 'comfyanonymous:master' into master
2025-04-16 00:50:26 +03:00
comfyanonymous
9ad792f927
Basic support for hidream i1 model.
2025-04-15 17:35:05 -04:00
patientx
39cf3cdc32
Merge branch 'comfyanonymous:master' into master
2025-03-27 12:35:23 +03:00
comfyanonymous
3661c833bc
Support the WAN 2.1 fun control models.
Use the new WanFunControlToVideo node.
2025-03-26 19:54:54 -04:00
patientx
a6db9cc07a
Merge branch 'comfyanonymous:master' into master
2025-03-22 13:52:43 +03:00
thot experiment
83e839a89b
Native LotusD Implementation (#7125)
* draft pass at a native comfy implementation of Lotus-D depth and normal estimation
* fix model_sampling kludges
* fix ruff
---------
Co-authored-by: comfyanonymous <121283862+comfyanonymous@users.noreply.github.com>
2025-03-21 14:04:15 -04:00
patientx
28a4de830c
Merge branch 'comfyanonymous:master' into master
2025-03-20 14:23:30 +03:00
comfyanonymous
3872b43d4b
A few fixes for the hunyuan3d models.
2025-03-20 04:52:31 -04:00
comfyanonymous
11f1b41bab
Initial Hunyuan3Dv2 implementation.
Supports the multiview, mini, turbo models and VAEs.
2025-03-19 16:52:58 -04:00
patientx
8847252eec
Merge branch 'comfyanonymous:master' into master
2025-03-07 16:14:21 +03:00
comfyanonymous
11b1f27cb1
Set WAN default compute dtype to fp16.
2025-03-07 04:52:36 -05:00
patientx
44c060b3de
Merge branch 'comfyanonymous:master' into master
2025-03-06 12:16:31 +03:00
comfyanonymous
29a70ca101
Support HunyuanVideo image to video model.
2025-03-06 03:07:15 -05:00
patientx
c36a942c12
Merge branch 'comfyanonymous:master' into master
2025-03-05 19:11:25 +03:00
comfyanonymous
9c9a7f012a
Adjust ltxv memory factor.
2025-03-05 05:16:05 -05:00
patientx
cacb8da101
Merge branch 'comfyanonymous:master' into master
2025-03-04 13:17:37 +03:00
comfyanonymous
7c7c70c400
Refactor skyreels i2v code.
2025-03-04 00:15:45 -05:00
patientx
debf69185c
Merge branch 'comfyanonymous:master' into master
2025-02-26 16:00:33 +03:00
comfyanonymous
b6fefe686b
Better wan memory estimation.
2025-02-26 07:51:22 -05:00
patientx
743996a1f7
Merge branch 'comfyanonymous:master' into master
2025-02-26 12:56:06 +03:00
comfyanonymous
4ced06b879
WIP support for Wan I2V model.
2025-02-26 01:49:43 -05:00
patientx
1e91ff59a1
Merge branch 'comfyanonymous:master' into master
2025-02-26 09:24:15 +03:00
comfyanonymous
cb06e9669b
Wan seems to work with fp16.
2025-02-25 21:37:12 -05:00
patientx
879db7bdfc
Merge branch 'comfyanonymous:master' into master
2025-02-26 02:07:25 +03:00
comfyanonymous
63023011b9
WIP support for Wan t2v model.
2025-02-25 17:20:35 -05:00
patientx
93c0fc3446
Update supported_models.py
2025-02-05 23:12:38 +03:00
patientx
f8c2ab631a
Merge branch 'comfyanonymous:master' into master
2025-02-05 23:10:21 +03:00
comfyanonymous
37cd448529
Set the shift for Lumina back to 6.
2025-02-05 14:49:52 -05:00
patientx
523b5352b8
Merge branch 'comfyanonymous:master' into master
2025-02-04 18:16:21 +03:00
comfyanonymous
8ac2dddeed
Lower the default shift of lumina to reduce artifacts.
2025-02-04 06:50:37 -05:00
patientx
b8ab0f2091
Merge branch 'comfyanonymous:master' into master
2025-02-04 12:32:22 +03:00
comfyanonymous
e5ea112a90
Support Lumina 2 model.
2025-02-04 04:16:30 -05:00
patientx
4afa79a368
Merge branch 'comfyanonymous:master' into master
2025-01-16 17:23:00 +03:00
comfyanonymous
88ceb28e20
Tweak hunyuan memory usage factor.
2025-01-16 06:31:03 -05:00
patientx
ed13b68e4f
Merge branch 'comfyanonymous:master' into master
2025-01-16 13:59:32 +03:00
comfyanonymous
9d8b6c1f46
More accurate memory estimation for cosmos and hunyuan video.
2025-01-16 03:48:40 -05:00
patientx
c7ebd121d6
Merge branch 'comfyanonymous:master' into master
2025-01-14 15:50:05 +03:00
comfyanonymous
3aaabb12d4
Implement Cosmos Image/Video to World (Video) diffusion models.
Use CosmosImageToVideoLatent to set the input image/video.
2025-01-14 05:14:10 -05:00
patientx
7d45042e2e
Merge branch 'comfyanonymous:master' into master
2025-01-10 18:31:48 +03:00
comfyanonymous
2ff3104f70
WIP support for Nvidia Cosmos 7B and 14B text to world (video) models.
2025-01-10 09:14:16 -05:00
patientx
b2a9683d75
Merge branch 'comfyanonymous:master' into master
2024-12-27 15:38:26 +03:00
comfyanonymous
4b5bcd8ac4
Closer memory estimation for hunyuan dit model.
2024-12-27 07:37:00 -05:00
comfyanonymous
ceb50b2cbf
Closer memory estimation for pixart models.
2024-12-27 07:30:09 -05:00
patientx
757335d901
Update supported_models.py
2024-12-23 02:54:49 +03:00
patientx
713eca2176
Update supported_models.py
2024-12-23 02:32:50 +03:00
patientx
37fc9a3ff2
Merge branch 'comfyanonymous:master' into master
2024-12-21 00:37:40 +03:00
City
bddb02660c
Add PixArt model support (#6055)
* PixArt initial version
* PixArt Diffusers convert logic
* pos_emb and interpolation logic
* Reduce duplicate code
* Formatting
* Use optimized attention
* Edit empty token logic
* Basic PixArt LoRA support
* Fix aspect ratio logic
* PixArtAlpha text encode with conds
* Use same detection key logic for PixArt diffusers
2024-12-20 15:25:00 -05:00
patientx
dc574cdc47
Merge branch 'comfyanonymous:master' into master
2024-12-17 13:57:30 +03:00
comfyanonymous
d6656b0c0c
Support llama hunyuan video text encoder in scaled fp8 format.
2024-12-17 04:19:22 -05:00
comfyanonymous
39b1fc4ccc
Adjust used dtypes for hunyuan video VAE and diffusion model.
2024-12-16 23:31:10 -05:00
comfyanonymous
bda1482a27
Basic Hunyuan Video model support.
2024-12-16 19:35:40 -05:00
patientx
3218ed8559
Merge branch 'comfyanonymous:master' into master
2024-12-13 11:21:47 +03:00
Chenlei Hu
d9d7f3c619
Lint all unused variables (#5989)
* Enable F841
* Autofix
* Remove all unused variable assignment
2024-12-12 17:59:16 -05:00
patientx
9fe09c02c2
Merge branch 'comfyanonymous:master' into master
2024-11-26 11:50:33 +03:00
comfyanonymous
b7143b74ce
Flux inpaint model does not work in fp16.
2024-11-26 01:33:01 -05:00
patientx
396d7cd9d0
Merge branch 'comfyanonymous:master' into master
2024-11-22 18:14:14 +03:00
comfyanonymous
5e16f1d24b
Support Lightricks LTX-Video model.
2024-11-22 08:46:39 -05:00
patientx
96bccdce39
Merge branch 'comfyanonymous:master' into master
2024-11-11 13:42:59 +03:00
comfyanonymous
8b275ce5be
Support auto detecting some zsnr anime checkpoints.
2024-11-11 05:34:11 -05:00
patientx
7b1f4c1094
Merge branch 'comfyanonymous:master' into master
2024-10-28 10:57:25 +03:00
comfyanonymous
669d9e4c67
Set default shift on mochi to 6.0
2024-10-27 22:21:04 -04:00
patientx
b712d0a718
Merge branch 'comfyanonymous:master' into master
2024-10-27 13:14:51 +03:00
comfyanonymous
9ee0a6553a
float16 inference is a bit broken on mochi.
2024-10-27 04:56:40 -04:00
patientx
0575f60ac4
Merge branch 'comfyanonymous:master' into master
2024-10-26 15:10:27 +03:00
comfyanonymous
5cbb01bc2f
Basic Genmo Mochi video model support.
To use:
"Load CLIP" node with t5xxl + type mochi
"Load Diffusion Model" node with the mochi dit file.
"Load VAE" with the mochi vae file.
EmptyMochiLatentVideo node for the latent.
euler + linear_quadratic in the KSampler node.
2024-10-26 06:54:00 -04:00
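The node list in the commit body above can be sketched as a ComfyUI API-format workflow fragment. This is a minimal, hypothetical sketch only: the node class names and input field names are assumed from ComfyUI's built-in nodes at the time, and all file names are placeholders, not files shipped with this commit.

```json
{
  "1": {"class_type": "CLIPLoader",
        "inputs": {"clip_name": "t5xxl_fp16.safetensors", "type": "mochi"}},
  "2": {"class_type": "UNETLoader",
        "inputs": {"unet_name": "mochi_dit.safetensors", "weight_dtype": "default"}},
  "3": {"class_type": "VAELoader",
        "inputs": {"vae_name": "mochi_vae.safetensors"}},
  "4": {"class_type": "CLIPTextEncode",
        "inputs": {"text": "a video of a cat", "clip": ["1", 0]}},
  "5": {"class_type": "CLIPTextEncode",
        "inputs": {"text": "", "clip": ["1", 0]}},
  "6": {"class_type": "EmptyMochiLatentVideo",
        "inputs": {"width": 848, "height": 480, "length": 25, "batch_size": 1}},
  "7": {"class_type": "KSampler",
        "inputs": {"model": ["2", 0], "seed": 0, "steps": 30, "cfg": 4.5,
                   "sampler_name": "euler", "scheduler": "linear_quadratic",
                   "positive": ["4", 0], "negative": ["5", 0],
                   "latent_image": ["6", 0], "denoise": 1.0}},
  "8": {"class_type": "VAEDecode",
        "inputs": {"samples": ["7", 0], "vae": ["3", 0]}}
}
```

The `["node_id", output_index]` pairs express the wiring between nodes; a fragment like this could be submitted to a running ComfyUI instance via its `/prompt` endpoint.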
patientx
9fd46200ab
Merge branch 'comfyanonymous:master' into master
2024-10-21 12:23:49 +03:00
comfyanonymous
83ca891118
Support scaled fp8 t5xxl model.
2024-10-20 22:27:00 -04:00
patientx
524cd140b5
removed bfloat from flux model support, resulting in 2x speedup
2024-08-30 13:33:32 +03:00
comfyanonymous
8f60d093ba
Fix issue.
2024-08-22 10:38:24 -04:00
comfyanonymous
0f9c2a7822
Try to fix SDXL OOM issue on some configurations.
2024-08-14 23:08:54 -04:00
comfyanonymous
8115d8cce9
Add Flux fp16 support hack.
2024-08-07 15:08:39 -04:00
comfyanonymous
2d75df45e6
Flux tweak memory usage.
2024-08-05 21:58:28 -04:00
comfyanonymous
f123328b82
Load T5 in fp8 if it's in fp8 in the Flux checkpoint.
2024-08-03 12:39:33 -04:00
comfyanonymous
ea03c9dcd2
Better per model memory usage estimations.
2024-08-02 18:09:24 -04:00
comfyanonymous
1589b58d3e
Basic Flux Schnell and Flux Dev model implementation.
2024-08-01 09:49:29 -04:00
comfyanonymous
4ba7fa0244
Refactor: Move sd2_clip.py to text_encoders folder.
2024-07-28 01:19:20 -04:00
comfyanonymous
a5f4292f9f
Basic hunyuan dit implementation. (#4102)
* Let tokenizers return weights to be stored in the saved checkpoint.
* Basic hunyuan dit implementation.
* Fix some resolutions not working.
* Support hydit checkpoint save.
* Init with right dtype.
* Switch to optimized attention in pooler.
* Fix black images on hunyuan dit.
2024-07-25 18:21:08 -04:00
comfyanonymous
1305fb294c
Refactor: Move some code to the comfy/text_encoders folder.
2024-07-15 17:36:24 -04:00
comfyanonymous
8e012043a9
Add a ModelSamplingAuraFlow node to change the shift value.
Set the default AuraFlow shift value to 1.73 (sqrt(3)).
2024-07-11 17:57:36 -04:00
comfyanonymous
9f291d75b3
AuraFlow model implementation.
2024-07-11 16:52:26 -04:00