• Joined on 2024-08-30
wangbo synced commits to refs/pull/8330/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:44 +08:00
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
e89b22993a Support ModelScope-Trainer/DiffSynth LoRA format for Flux.2 Klein models (#12042)
55bd606e92 LTX2: Refactor forward function for better VRAM efficiency and fix spatial inpainting (#12046)
Compare 18 commits »
wangbo synced commits to refs/pull/8297/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:44 +08:00
Compare 23 commits »
wangbo synced commits to refs/pull/8218/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:43 +08:00
Compare 23 commits »
wangbo synced commits to refs/pull/7743/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:42 +08:00
Compare 14 commits »
wangbo synced commits to refs/pull/8153/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:42 +08:00
Compare 18 commits »
wangbo synced commits to refs/pull/7742/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:41 +08:00
Compare 14 commits »
wangbo synced commits to refs/pull/7741/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:40 +08:00
Compare 14 commits »
wangbo synced commits to refs/pull/7571/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:39 +08:00
Compare 23 commits »
wangbo synced commits to refs/pull/7701/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:39 +08:00
Compare 20 commits »
wangbo synced commits to refs/pull/7376/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:38 +08:00
Compare 5 commits »
wangbo synced commits to refs/pull/7562/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:38 +08:00
Compare 5 commits »
wangbo synced commits to refs/pull/7308/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:37 +08:00
Compare 13 commits »
wangbo synced commits to refs/pull/7276/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:36 +08:00
Compare 14 commits »
wangbo synced commits to refs/pull/7199/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:36 +08:00
Compare 23 commits »
wangbo synced commits to refs/pull/7160/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:35 +08:00
Compare 23 commits »
wangbo synced commits to refs/pull/7075/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:34 +08:00
Compare 12 commits »
wangbo synced commits to refs/pull/7044/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:33 +08:00
Compare 23 commits »
wangbo synced commits to refs/pull/6993/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:31 +08:00
Compare 23 commits »
wangbo synced commits to refs/pull/6829/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:30 +08:00
Compare 14 commits »
wangbo synced commits to refs/pull/6332/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:29 +08:00
Compare 5 commits »