comfyanonymous
2865f913f7
Free memory before doing tiled decode.
2024-11-07 04:01:24 -05:00
comfyanonymous
b49616f951
Make VAEDecodeTiled node work with video VAEs.
2024-11-07 03:47:12 -05:00
comfyanonymous
8afb97cd3f
Fix unknown VAE being detected as the mochi VAE.
2024-11-05 03:43:27 -05:00
doctorpangloss
772e768fe8
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-11-04 10:17:26 -08:00
comfyanonymous
fabf449feb
Mochi VAE encoder.
2024-11-01 17:33:09 -04:00
doctorpangloss
76a80a65ea
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-10-29 15:35:39 -07:00
comfyanonymous
c320801187
Remove useless line.
2024-10-28 17:41:12 -04:00
comfyanonymous
5cbb01bc2f
Basic Genmo Mochi video model support.
To use:
"Load CLIP" node with t5xxl + type mochi
"Load Diffusion Model" node with the mochi dit file.
"Load VAE" with the mochi vae file.
EmptyMochiLatentVideo node for the latent.
euler + linear_quadratic in the KSampler node.
2024-10-26 06:54:00 -04:00
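A minimal loading sketch in Python that mirrors the node setup above, using ComfyUI's comfy.sd helpers; the file names are placeholders and CLIPType.MOCHI is assumed to be the clip type backing "type mochi" in the Load CLIP node:

    import comfy.sd
    import comfy.utils

    # Placeholder paths for the t5xxl, Mochi DiT and Mochi VAE files mentioned above.
    clip = comfy.sd.load_clip(
        ckpt_paths=["models/clip/t5xxl_fp16.safetensors"],
        clip_type=comfy.sd.CLIPType.MOCHI,  # assumed enum value for type "mochi"
    )
    model = comfy.sd.load_diffusion_model("models/diffusion_models/mochi_dit.safetensors")
    vae = comfy.sd.VAE(sd=comfy.utils.load_torch_file("models/vae/mochi_vae.safetensors"))

Sampling then proceeds as in any other workflow, with euler + linear_quadratic selected in the KSampler node.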
comfyanonymous
0075c6d096
Mixed precision diffusion models with scaled fp8.
This change adds support for diffusion models where all the linear layers are
in scaled fp8 while the other weights stay in their original precision.
2024-10-21 18:12:51 -04:00
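To illustrate what "scaled fp8" means here, a conceptual PyTorch sketch (not ComfyUI's actual implementation): each linear stores its weight in float8_e4m3fn plus a higher-precision scale and dequantizes to the compute dtype on use, while the non-linear weights stay in their original precision.

    import torch

    weight_fp8 = torch.randn(128, 64).to(torch.float8_e4m3fn)  # linear weight stored in fp8
    scale = torch.tensor(0.03)                                  # per-tensor scale kept in fp32

    def scaled_fp8_linear(x, weight_fp8, scale, bias=None):
        # Dequantize to the activation dtype, then run a normal linear layer.
        w = weight_fp8.to(x.dtype) * scale.to(x.dtype)
        return torch.nn.functional.linear(x, w, bias)

    x = torch.randn(2, 64, dtype=torch.bfloat16)
    y = scaled_fp8_linear(x, weight_fp8, scale)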
comfyanonymous
83ca891118
Support scaled fp8 t5xxl model.
2024-10-20 22:27:00 -04:00
comfyanonymous
a68bbafddb
Support diffusion models with scaled fp8 weights.
2024-10-19 23:47:42 -04:00
doctorpangloss
8512f361fe
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-10-14 15:26:27 -07:00
comfyanonymous
6632365e16
model_options consistency between functions.
weight_dtype -> dtype
2024-10-11 20:51:19 -04:00
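A short sketch of the effect of the rename: callers now pass "dtype" in model_options when loading a diffusion model (the checkpoint path below is a placeholder).

    import torch
    import comfy.sd

    # "dtype" replaces the old "weight_dtype" key in model_options.
    model = comfy.sd.load_diffusion_model(
        "models/diffusion_models/model.safetensors",  # placeholder path
        model_options={"dtype": torch.float8_e4m3fn},
    )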
comfyanonymous
1b80895285
Make clip loader nodes support loading sd3 t5xxl in lower precision.
Add attention mask support in the SD3 text encoder code.
2024-10-10 15:06:15 -04:00
Dr.Lt.Data
5f9d5a244b
Hotfix for the division by zero that occurs when memory_used_encode is 0 (#5121)
https://github.com/comfyanonymous/ComfyUI/issues/5069#issuecomment-2382656368
2024-10-09 23:34:34 -04:00
comfyanonymous
e38c94228b
Add a weight_dtype fp8_e4m3fn_fast to the Diffusion Model Loader node.
This is used to load weights in fp8 and use fp8 matrix multiplication.
2024-10-09 19:43:17 -04:00
comfyanonymous
7d2467e830
Some minor cleanups.
2024-10-05 13:22:39 -04:00
comfyanonymous
abcd006b8c
Allow more permutations of clip/t5 in dual clip loader.
2024-10-03 09:26:11 -04:00
comfyanonymous
d985d1d7dc
CLIP Loader node now supports clip_l and clip_g only for SD3.
2024-10-02 04:25:17 -04:00
comfyanonymous
d1cdf51e1b
Refactor some of the TE detection code.
2024-10-01 07:08:41 -04:00
doctorpangloss
fa3176f96f
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-09-23 12:50:31 -07:00
comfyanonymous
ad66f7c7d8
Add model_options to load_controlnet function.
2024-09-19 08:23:35 -04:00
comfyanonymous
d514bb38ee
Add some option to model_options for the text encoder.
load_device, offload_device and the initial_device can now be set.
2024-09-17 03:49:54 -04:00
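A sketch of the new text encoder options, assuming the keys behave as named in the commit body; the path, clip type and device choices are illustrative.

    import torch
    import comfy.sd

    clip = comfy.sd.load_clip(
        ckpt_paths=["models/clip/t5xxl_fp16.safetensors"],  # placeholder path
        clip_type=comfy.sd.CLIPType.SD3,
        model_options={
            "load_device": torch.device("cuda"),
            "offload_device": torch.device("cpu"),
            "initial_device": torch.device("cpu"),
        },
    )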
comfyanonymous
e813abbb2c
Long CLIP L support for SDXL, SD3 and Flux.
Use the *CLIPLoader nodes.
2024-09-15 07:59:38 -04:00
doctorpangloss
5155a3e248
Merge WIP
2024-08-25 18:52:29 -07:00
comfyanonymous
d1a6bd6845
Support loading long clipl model with the CLIP loader node.
2024-08-20 10:46:36 -04:00
comfyanonymous
9eee470244
New load_text_encoder_state_dicts function.
Now you can load text encoders straight from a list of state dicts.
2024-08-19 17:36:35 -04:00
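A sketch of the new entry point, assuming it accepts a list of text-encoder state dicts plus a clip type; the exact keyword names and the file paths are assumptions.

    import comfy.sd
    import comfy.utils

    clip_l_sd = comfy.utils.load_torch_file("models/clip/clip_l.safetensors")
    t5xxl_sd = comfy.utils.load_torch_file("models/clip/t5xxl_fp16.safetensors")
    clip = comfy.sd.load_text_encoder_state_dicts(
        state_dicts=[clip_l_sd, t5xxl_sd],
        clip_type=comfy.sd.CLIPType.SD3,
    )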
comfyanonymous
4f7a3cb6fb
unet -> diffusion_models.
2024-08-17 21:31:04 -04:00
comfyanonymous
fca42836f2
Add model_options for text encoder.
2024-08-17 11:17:20 -04:00
doctorpangloss
0549f35e85
Merge commit '39fb74c5bd13a1dccf4d7293a2f7a755d9f43cbd' of github.com:comfyanonymous/ComfyUI
- Improvements to tests
- Fixes model management
- Fixes issues with language nodes
2024-08-13 20:08:56 -07:00
comfyanonymous
74e124f4d7
Fix some issues with TE being in lowvram mode.
2024-08-12 23:42:21 -04:00
comfyanonymous
a562c17e8a
load_unet -> load_diffusion_model with a model_options argument.
2024-08-12 23:20:57 -04:00
comfyanonymous
ad76574cb8
Fix some potential issues with the previous commits.
2024-08-12 00:23:29 -04:00
comfyanonymous
9acfe4df41
Support loading directly to vram with CLIPLoader node.
2024-08-12 00:06:01 -04:00
comfyanonymous
9829b013ea
Fix mistake in last commit.
2024-08-12 00:00:17 -04:00
comfyanonymous
5c69cde037
Load TE model straight to vram if certain conditions are met.
2024-08-11 23:52:43 -04:00
comfyanonymous
e9589d6d92
Add a way to set model dtype and ops from load_checkpoint_guess_config.
2024-08-11 08:50:34 -04:00
comfyanonymous
0d82a798a5
Remove the ckpt_path from load_state_dict_guess_config.
2024-08-11 08:37:35 -04:00
ljleb
925fff26fd
Alternative to load_checkpoint_guess_config that accepts a loaded state dict (#4249)
* make alternative fn
* add back ckpt path as 2nd argument?
2024-08-11 08:36:52 -04:00
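A sketch of the alternative entry point: load the state dict yourself, then hand it to the guess-config function. The checkpoint path is a placeholder and the return tuple is assumed to match load_checkpoint_guess_config.

    import comfy.sd
    import comfy.utils

    sd = comfy.utils.load_torch_file("models/checkpoints/checkpoint.safetensors")  # placeholder
    model, clip, vae, clipvision = comfy.sd.load_state_dict_guess_config(sd)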
comfyanonymous
b334605a66
Fix OOMs happening in some cases.
A cloned model patcher sometimes reported a model was loaded on a device
when it wasn't.
2024-08-06 13:36:04 -04:00
doctorpangloss
39c6335331
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-08-05 16:13:20 -07:00
comfyanonymous
2ba5cc8b86
Fix some issues.
2024-08-03 15:06:40 -04:00
comfyanonymous
ba9095e5bd
Automatically use fp8 for diffusion model weights if:
- The checkpoint contains weights in fp8.
- There isn't enough memory to load the diffusion model in GPU vram.
2024-08-03 13:45:19 -04:00
doctorpangloss
0a1ae64b0b
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-08-01 16:19:11 -07:00
comfyanonymous
d7430a1651
Add a way to load the diffusion model in fp8 with UNETLoader node.
2024-08-01 13:30:51 -04:00
comfyanonymous
5f98de7697
Load flux t5 in fp8 if weights are in fp8.
2024-08-01 11:05:56 -04:00
comfyanonymous
1589b58d3e
Basic Flux Schnell and Flux Dev model implementation.
2024-08-01 09:49:29 -04:00
doctorpangloss
ce5fe01768
Improve performance and memory management of upscale models; improve messaging about which models are loaded and unloaded from the GPU
2024-07-30 17:05:53 -07:00
doctorpangloss
34522e0914
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-07-30 11:11:45 -07:00
comfyanonymous
4ba7fa0244
Refactor: Move sd2_clip.py to text_encoders folder.
2024-07-28 01:19:20 -04:00
comfyanonymous
a5f4292f9f
Basic hunyuan dit implementation. ( #4102 )
* Let tokenizers return weights to be stored in the saved checkpoint.
* Basic hunyuan dit implementation.
* Fix some resolutions not working.
* Support hydit checkpoint save.
* Init with right dtype.
* Switch to optimized attention in pooler.
* Fix black images on hunyuan dit.
2024-07-25 18:21:08 -04:00
comfyanonymous
f87810cd3e
Let tokenizers return weights to be stored in the saved checkpoint.
2024-07-25 10:52:09 -04:00
comfyanonymous
10c919f4c7
Make it possible to load tokenizer data from checkpoints.
2024-07-24 16:43:53 -04:00
doctorpangloss
cc99d89ac6
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-07-18 16:31:21 -07:00
doctorpangloss
72baecad87
Improve logging and tracing for validation errors
2024-07-16 12:26:30 -07:00
comfyanonymous
1305fb294c
Refactor: Move some code to the comfy/text_encoders folder.
2024-07-15 17:36:24 -04:00
doctorpangloss
3d1d833e6f
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-07-15 14:22:49 -07:00
comfyanonymous
a3dffc447a
Support AuraFlow Lora and loading model weights in diffusers format.
You can load model weights in diffusers format using the UNETLoader node.
2024-07-13 13:51:40 -04:00
comfyanonymous
9f291d75b3
AuraFlow model implementation.
2024-07-11 16:52:26 -04:00
comfyanonymous
f45157e3ac
Fix error message never being shown.
2024-07-11 11:46:51 -04:00
comfyanonymous
5e1fced639
Cleaner support for loading different diffusion model types.
2024-07-11 11:37:31 -04:00
comfyanonymous
ffe0bb0a33
Remove useless code.
2024-07-10 20:33:12 -04:00
comfyanonymous
391c1046cf
More flexibility with text encoder return values.
Text encoders can now return values other than the cond and pooled output
to the CONDITIONING.
2024-07-10 20:06:50 -04:00
comfyanonymous
4040491149
Better T5xxl detection.
2024-07-06 00:53:33 -04:00
doctorpangloss
a13088ccec
Merge upstream
2024-07-04 11:58:55 -07:00
comfyanonymous
d7484ef30c
Support loading checkpoints with the UNETLoader node.
2024-07-03 11:34:32 -04:00
comfyanonymous
537f35c7bc
Don't update dict if contiguous.
2024-07-02 20:21:51 -04:00
Alex "mcmonkey" Goodwin
3f46362d22
Fix non-contiguous tensor saving (from channels-last) (#3932)
2024-07-02 20:16:33 -04:00
comfyanonymous
8ceb5a02a3
Support saving stable audio checkpoint that can be loaded back.
2024-06-27 11:06:52 -04:00
comfyanonymous
4ef1479dcd
Multi dimension tiled scale function and tiled VAE audio encoding fallback.
2024-06-22 11:57:49 -04:00
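For readers unfamiliar with tiled decoding, a conceptual 1D sketch of tiling with overlap blending (an illustration of the idea, not ComfyUI's tiled_scale implementation):

    import torch

    def tiled_apply_1d(samples, fn, tile=1024, overlap=64):
        # Assumes tile <= samples.shape[-1] and that fn preserves the last dimension.
        length = samples.shape[-1]
        starts = list(range(0, length - tile + 1, tile - overlap))
        if starts[-1] + tile < length:
            starts.append(length - tile)  # make sure the tail is covered
        out = torch.zeros_like(samples)
        weight = torch.zeros_like(samples)
        ramp = torch.linspace(1, overlap, overlap) / overlap  # strictly positive weights
        for s in starts:
            chunk = fn(samples[..., s:s + tile])
            mask = torch.ones(tile)
            if s > 0:
                mask[:overlap] = ramp           # fade in over the left overlap
            if s + tile < length:
                mask[-overlap:] = ramp.flip(0)  # fade out over the right overlap
            out[..., s:s + tile] += chunk * mask
            weight[..., s:s + tile] += mask
        return out / weight

    # Example: run an identity "decoder" over a fake 1D latent.
    decoded = tiled_apply_1d(torch.randn(1, 64, 6000), lambda t: t)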
comfyanonymous
1e2839f4d9
More proper tiled audio decoding.
2024-06-20 16:50:31 -04:00
comfyanonymous
0d6a57938e
Support loading diffusers SD3 model format with UNETLoader node.
2024-06-19 22:21:18 -04:00
comfyanonymous
a45df69570
Basic tiled decoding for audio VAE.
2024-06-17 22:48:23 -04:00
doctorpangloss
8cdc246450
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-06-17 16:19:48 -07:00
comfyanonymous
6425252c4f
Use fp16 as the default vae dtype for the audio VAE.
2024-06-16 13:12:54 -04:00
comfyanonymous
ca9d300a80
Better estimation for memory usage during audio VAE encoding/decoding.
2024-06-16 11:47:32 -04:00
comfyanonymous
746a0410d4
Fix VAEEncode with taesd3.
2024-06-16 03:10:04 -04:00
comfyanonymous
04e8798c37
Improvements to the TAESD3 implementation.
2024-06-16 02:04:24 -04:00
Dr.Lt.Data
df7db0e027
Support TAESD3 (#3738)
2024-06-16 02:03:53 -04:00
comfyanonymous
bb1969cab7
Initial support for the stable audio open model.
2024-06-15 12:14:56 -04:00
doctorpangloss
cac6690481
Add known SD3 model files, merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-06-12 10:56:41 -07:00
comfyanonymous
69c8d6d8a6
Single and dual clip loader nodes support SD3.
For SD3 you can use the CLIPLoader to use only the t5xxl, or the DualCLIPLoader
to use only CLIP-L and CLIP-G.
2024-06-11 23:27:39 -04:00
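A sketch of the two loading paths described above; the file names are placeholders.

    import comfy.sd

    # t5xxl only (CLIPLoader):
    t5_only = comfy.sd.load_clip(
        ckpt_paths=["models/clip/t5xxl_fp16.safetensors"],
        clip_type=comfy.sd.CLIPType.SD3,
    )
    # CLIP-L + CLIP-G only (DualCLIPLoader):
    clip_lg = comfy.sd.load_clip(
        ckpt_paths=["models/clip/clip_l.safetensors", "models/clip/clip_g.safetensors"],
        clip_type=comfy.sd.CLIPType.SD3,
    )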
comfyanonymous
0e49211a11
Load the SD3 T5xxl model in the same dtype stored in the checkpoint.
2024-06-11 17:03:26 -04:00
comfyanonymous
5889b7ca0a
Support multiple text encoder configurations on SD3.
2024-06-11 13:14:43 -04:00
doctorpangloss
a5d828be77
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-06-10 13:21:36 -07:00
comfyanonymous
8c4a9befa7
SD3 Support.
2024-06-10 14:06:23 -04:00
comfyanonymous
0920e0e5fe
Remove some unused imports.
2024-05-27 19:08:27 -04:00
doctorpangloss
3d98440fb7
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-05-16 14:28:49 -07:00
doctorpangloss
5a9055fe05
Tokenizers are now shallow cloned when CLIP is cloned. This allows nodes to add vocab to the tokenizer, as some checkpoints and LoRAs may require.
2024-05-16 12:39:19 -07:00
doctorpangloss
8741cb3ce8
LLM support in ComfyUI
- Currently uses `transformers`
- Supports model management, correctly loading and unloading models based on what your machine can support
- Includes a Text Diffusers 2 workflow to demonstrate text rendering in SD1.5
2024-05-14 17:30:23 -07:00
comfyanonymous
e1489ad257
Fix issue with lowvram mode breaking model saving.
2024-05-11 21:55:20 -04:00
doctorpangloss
c2fa74f625
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-05-09 13:45:38 -07:00
comfyanonymous
93e876a3be
Remove warnings that confuse people.
2024-05-09 05:29:42 -04:00
doctorpangloss
3a64e04a93
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-05-07 13:57:53 -07:00
comfyanonymous
c61eadf69a
Make the load checkpoint with config function call the regular one.
I was going to completely remove this function because it is unmaintainable
but I think this is the best compromise.
The clip skip and v_prediction parts of the configs should still work but
not the fp16 vs fp32.
2024-05-06 20:04:39 -04:00
doctorpangloss
f965fb2bc0
Merge upstream
2024-04-24 22:41:43 -07:00
comfyanonymous
8dc19e40d1
Don't init a VAE model when there are no VAE weights.
2024-04-24 09:20:31 -04:00
comfyanonymous
c59fe9f254
Support VAE without quant_conv.
2024-04-18 21:05:33 -04:00
doctorpangloss
034ffcea03
Merge branch 'master' of github.com:comfyanonymous/ComfyUI
2024-04-08 10:02:37 -07:00
comfyanonymous
30abc324c2
Support properly saving CosXL checkpoints.
2024-04-08 00:36:22 -04:00