ComfyUI/models
comfyanonymous 943b3b615d
HunyuanVideo 1.5 (#10819)
* init

* update

* Update model.py

* Update model.py

* remove print

* Fix text encoding

* Prevent empty negative prompt

Really doesn't work otherwise
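The guard can be sketched as a tiny helper (the function name and fallback value are illustrative assumptions, not ComfyUI's actual code):

```python
def ensure_negative_prompt(negative: str, fallback: str = " ") -> str:
    # An all-empty negative conditioning breaks sampling for this model,
    # so substitute a minimal placeholder before text encoding.
    # Name and fallback value are illustrative assumptions.
    return negative if negative.strip() else fallback
```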

* fp16 works

* I2V

* Update model_base.py

* Update nodes_hunyuan.py

* Better latent rgb factors
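Latent RGB factors are the per-channel linear map used to render fast previews of a latent without a full VAE decode. A minimal numpy sketch of the idea (the factor values are placeholders, not the ones shipped for HunyuanVideo 1.5):

```python
import numpy as np

def latent_to_rgb_preview(latent, rgb_factors, bias=None):
    """Project a (C, H, W) latent to an (H, W, 3) RGB preview.

    rgb_factors: (C, 3) linear map from latent channels to RGB.
    The values are model-specific; anything passed here is a
    placeholder, not the factors from the commit.
    """
    rgb = np.tensordot(latent, rgb_factors, axes=([0], [0]))  # -> (H, W, 3)
    if bias is not None:
        rgb = rgb + bias
    # map roughly [-1, 1] to [0, 255] for display
    return np.clip((rgb + 1.0) * 127.5, 0, 255).astype(np.uint8)
```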

* Use the correct sigclip output...

* Support HunyuanVideo1.5 SR model

* whitespaces...

* Proper latent channel count

* SR model fixes

This also still needs timestep scheduling based on the noise scale; it can already be used with two samplers.
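One way to read the two-sampler usage: split the sigma schedule so the base model runs the high-noise steps and the SR model finishes from an intermediate noise level. A speculative sketch (the split function and boundary are assumptions, not the noise-scale scheduling the commit says is still needed):

```python
def split_sigmas(sigmas, boundary):
    """Split a descending sigma schedule at the first sigma <= boundary.

    Returns (high_noise_part, low_noise_part); both parts share the
    boundary sigma so the second sampler resumes exactly where the
    first one stopped. Purely illustrative.
    """
    for i, s in enumerate(sigmas):
        if s <= boundary:
            return sigmas[: i + 1], sigmas[i:]
    return sigmas, sigmas[-1:]
```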

* vae_refiner: roll the convolution through temporal

Work in progress.

Roll the convolution through time using 2-latent-frame chunks and a
FIFO queue for the convolution seams.
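The rolling described above can be illustrated in one dimension: a causal convolution processed in 2-frame chunks produces output identical to the full pass, provided the last kernel_size - 1 input frames are carried across chunk boundaries in a small FIFO cache. A minimal numpy sketch of the technique, not the actual 3D VAE code:

```python
import numpy as np

def conv_full(x, k):
    # causal 1D convolution: left-pad with zeros, then slide the kernel
    pad = np.concatenate([np.zeros(len(k) - 1), x])
    return np.array([np.dot(pad[i:i + len(k)], k) for i in range(len(x))])

def conv_rolled(x, k, chunk=2):
    # process the sequence in `chunk`-frame pieces, carrying the last
    # (len(k) - 1) input frames forward as the convolution seam
    cache = np.zeros(len(k) - 1)            # FIFO of seam frames
    out = []
    for start in range(0, len(x), chunk):
        piece = x[start:start + chunk]
        padded = np.concatenate([cache, piece])
        for i in range(len(piece)):
            out.append(np.dot(padded[i:i + len(k)], k))
        cache = padded[-(len(k) - 1):]      # roll the seam forward
    return np.array(out)
```

Because only kernel_size - 1 frames are ever kept between chunks, peak memory scales with the chunk size rather than the full temporal extent.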

* Support HunyuanVideo15 latent resampler

* fix

* Some cleanup

Co-Authored-By: comfyanonymous <121283862+comfyanonymous@users.noreply.github.com>

* Proper hyvid15 I2V channels

Co-Authored-By: comfyanonymous <121283862+comfyanonymous@users.noreply.github.com>

* Fix TokenRefiner for fp16

Otherwise x.sum has infs. Just in case, only casting when the input is fp16; I don't know if that's strictly necessary.
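The failure mode is easy to reproduce: accumulating a long sum in fp16 overflows past its maximum finite value of 65504, while upcasting to fp32 first stays finite (a numpy stand-in for the tensor math, not the TokenRefiner code itself):

```python
import numpy as np

x = np.full(8192, 10.0, dtype=np.float16)   # true sum = 81920 > fp16 max (65504)
with np.errstate(over="ignore"):
    naive = x.sum(dtype=np.float16)         # fp16 accumulation overflows to inf
safe = x.astype(np.float32).sum()           # upcast before summing, as in the fix
```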

* Bugfix for the HunyuanVideo15 SR model

* vae_refiner: roll the convolution through temporal II

Roll the convolution through time using 2-latent-frame chunks and a
FIFO queue for the convolution seams.

Added support for the encoder, lowered to 1 latent frame to save more
VRAM, and made it work for Hunyuan Image 3.0 (as the code is shared).

Fixed names, cleaned up code.

* Allow any number of input frames in VAE.

* Better VAE encode mem estimation.
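Encode memory estimates of this kind usually reduce to element count times an empirical constant. A hypothetical sketch (the function name and overhead factor are illustrative assumptions, not the values from the commit):

```python
import math

def estimate_vae_encode_vram(shape, bytes_per_element=2, overhead_factor=6.5):
    """Rough VRAM estimate for a VAE encode pass.

    shape: input tensor shape, e.g. (batch, channels, frames, height, width).
    overhead_factor: empirical multiplier covering intermediate activations;
    the value here is illustrative, not measured.
    """
    elements = math.prod(shape)
    return elements * bytes_per_element * overhead_factor
```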

* Lowvram fix.

* Fix hunyuan image 2.1 refiner.

* Fix mistake.

* Name changes.

* Rename.

* Whitespace.

* Fix.

* Fix.

---------

Co-authored-by: kijai <40791699+kijai@users.noreply.github.com>
Co-authored-by: Rattus <rattus128@gmail.com>
2025-11-20 22:44:43 -05:00
audio_encoders Add models/audio_encoders directory. (#9548) 2025-08-25 20:13:54 -04:00
checkpoints Initial commit. 2023-01-16 22:37:14 -05:00
clip Add a CLIPLoader node to load standalone clip weights. 2023-02-05 15:20:18 -05:00
clip_vision Implement support for t2i style model. 2023-03-05 18:39:25 -05:00
configs Add the config for the SD1.x inpainting model. 2023-02-19 14:58:00 -05:00
controlnet Merge T2IAdapterLoader and ControlNetLoader. 2023-03-17 18:17:59 -04:00
diffusers diffusers loader 2023-04-05 23:57:31 -07:00
diffusion_models unet -> diffusion_models. 2024-08-17 21:31:04 -04:00
embeddings Add support for textual inversion embedding for SD1.x CLIP. 2023-01-29 18:46:44 -05:00
gligen Add support for GLIGEN textbox model. 2023-04-19 11:06:32 -04:00
hypernetworks Implement Linear hypernetworks. 2023-04-23 12:35:25 -04:00
latent_upscale_models HunyuanVideo 1.5 (#10819) 2025-11-20 22:44:43 -05:00
loras Add a LoraLoader node to apply loras to models and clip. 2023-02-03 02:46:24 -05:00
model_patches Support for Qwen Diffsynth Controlnets canny and depth. (#9465) 2025-08-20 22:26:37 -04:00
photomaker Add experimental photomaker nodes. 2024-01-24 09:51:42 -05:00
style_models Implement support for t2i style model. 2023-03-05 18:39:25 -05:00
text_encoders Update folder paths: "clip" -> "text_encoders" 2024-11-02 15:35:38 -04:00
unet Support loading unet files in diffusers format. 2023-07-05 17:38:59 -04:00
upscale_models Take some code from chainner to implement ESRGAN and other upscale models. 2023-03-11 13:09:28 -05:00
vae Initial commit. 2023-01-16 22:37:14 -05:00
vae_approx Refactor previews into one command line argument. 2023-06-06 02:13:05 -04:00