Joined on 2024-08-30
wangbo synced commits to refs/pull/2536/head at wangbo/ComfyUI-Manager from mirror 2026-01-25 13:30:23 +08:00
e91802dff4 chore: changed name of the nodepack
wangbo synced commits to refs/pull/2522/merge at wangbo/ComfyUI-Manager from mirror 2026-01-25 13:30:22 +08:00
d8e3f531c7 update DB
40829b059a update DB
6c48c98900 update DB
934132e922 Add Audio Enhance and Normalize tools (#2537)
Compare 14 commits »
wangbo synced commits to refs/pull/2495/merge at wangbo/ComfyUI-Manager from mirror 2026-01-25 13:30:21 +08:00
d8e3f531c7 update DB
40829b059a update DB
6c48c98900 update DB
934132e922 Add Audio Enhance and Normalize tools (#2537)
Compare 14 commits »
wangbo synced commits to refs/pull/2472/merge at wangbo/ComfyUI-Manager from mirror 2026-01-25 13:30:20 +08:00
d8e3f531c7 update DB
40829b059a update DB
6c48c98900 update DB
934132e922 Add Audio Enhance and Normalize tools (#2537)
Compare 14 commits »
wangbo synced commits to refs/pull/9848/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:46 +08:00
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
e89b22993a Support ModelScope-Trainer/DiffSynth LoRA format for Flux.2 Klein models (#12042)
55bd606e92 LTX2: Refactor forward function for better VRAM efficiency and fix spatial inpainting (#12046)
Compare 5 commits »
wangbo synced commits to refs/pull/9820/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:46 +08:00
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
e89b22993a Support ModelScope-Trainer/DiffSynth LoRA format for Flux.2 Klein models (#12042)
55bd606e92 LTX2: Refactor forward function for better VRAM efficiency and fix spatial inpainting (#12046)
Compare 13 commits »
wangbo synced commits to refs/pull/9478/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:46 +08:00
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
e89b22993a Support ModelScope-Trainer/DiffSynth LoRA format for Flux.2 Klein models (#12042)
55bd606e92 LTX2: Refactor forward function for better VRAM efficiency and fix spatial inpainting (#12046)
Compare 23 commits »
wangbo synced commits to refs/pull/9756/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:46 +08:00
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
e89b22993a Support ModelScope-Trainer/DiffSynth LoRA format for Flux.2 Klein models (#12042)
55bd606e92 LTX2: Refactor forward function for better VRAM efficiency and fix spatial inpainting (#12046)
Compare 23 commits »
wangbo synced commits to refs/pull/9974/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:46 +08:00
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
e89b22993a Support ModelScope-Trainer/DiffSynth LoRA format for Flux.2 Klein models (#12042)
55bd606e92 LTX2: Refactor forward function for better VRAM efficiency and fix spatial inpainting (#12046)
Compare 23 commits »
wangbo synced commits to refs/pull/9901/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:46 +08:00
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
e89b22993a Support ModelScope-Trainer/DiffSynth LoRA format for Flux.2 Klein models (#12042)
55bd606e92 LTX2: Refactor forward function for better VRAM efficiency and fix spatial inpainting (#12046)
Compare 23 commits »
wangbo synced commits to refs/pull/9803/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:46 +08:00
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
e89b22993a Support ModelScope-Trainer/DiffSynth LoRA format for Flux.2 Klein models (#12042)
55bd606e92 LTX2: Refactor forward function for better VRAM efficiency and fix spatial inpainting (#12046)
Compare 23 commits »
wangbo synced commits to refs/pull/9152/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:45 +08:00
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
e89b22993a Support ModelScope-Trainer/DiffSynth LoRA format for Flux.2 Klein models (#12042)
55bd606e92 LTX2: Refactor forward function for better VRAM efficiency and fix spatial inpainting (#12046)
Compare 13 commits »
wangbo synced commits to refs/pull/8985/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:45 +08:00
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
e89b22993a Support ModelScope-Trainer/DiffSynth LoRA format for Flux.2 Klein models (#12042)
55bd606e92 LTX2: Refactor forward function for better VRAM efficiency and fix spatial inpainting (#12046)
Compare 23 commits »
wangbo synced commits to refs/pull/9113/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:45 +08:00
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
Compare 3 commits »
wangbo synced commits to refs/pull/9207/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:45 +08:00
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
e89b22993a Support ModelScope-Trainer/DiffSynth LoRA format for Flux.2 Klein models (#12042)
55bd606e92 LTX2: Refactor forward function for better VRAM efficiency and fix spatial inpainting (#12046)
Compare 14 commits »
wangbo synced commits to refs/pull/9291/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:45 +08:00
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
e89b22993a Support ModelScope-Trainer/DiffSynth LoRA format for Flux.2 Klein models (#12042)
55bd606e92 LTX2: Refactor forward function for better VRAM efficiency and fix spatial inpainting (#12046)
Compare 23 commits »
wangbo synced commits to refs/pull/8752/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:45 +08:00
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
e89b22993a Support ModelScope-Trainer/DiffSynth LoRA format for Flux.2 Klein models (#12042)
55bd606e92 LTX2: Refactor forward function for better VRAM efficiency and fix spatial inpainting (#12046)
Compare 23 commits »
wangbo synced commits to refs/pull/8950/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:45 +08:00
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
Compare 3 commits »
wangbo synced commits to refs/pull/8297/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:44 +08:00
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
e89b22993a Support ModelScope-Trainer/DiffSynth LoRA format for Flux.2 Klein models (#12042)
55bd606e92 LTX2: Refactor forward function for better VRAM efficiency and fix spatial inpainting (#12046)
Compare 23 commits »
wangbo synced commits to refs/pull/8330/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:44 +08:00
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
e89b22993a Support ModelScope-Trainer/DiffSynth LoRA format for Flux.2 Klein models (#12042)
55bd606e92 LTX2: Refactor forward function for better VRAM efficiency and fix spatial inpainting (#12046)
Compare 18 commits »