wangbo synced commits to refs/pull/11350/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:17 +08:00
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
e89b22993a Support ModelScope-Trainer/DiffSynth LoRA format for Flux.2 Klein models (#12042)
55bd606e92 LTX2: Refactor forward function for better VRAM efficiency and fix spatial inpainting (#12046)
Compare 5 commits »
wangbo synced commits to refs/pull/11377/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:17 +08:00
Compare 5 commits »
wangbo synced commits to refs/pull/11321/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:17 +08:00
Compare 5 commits »
wangbo synced commits to refs/pull/10197/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:16 +08:00
Compare 23 commits »
wangbo synced commits to refs/pull/10016/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:16 +08:00
Compare 23 commits »
wangbo synced commits to refs/pull/10024/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:16 +08:00
Compare 13 commits »
wangbo synced commits to refs/pull/10238/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:16 +08:00
Compare 13 commits »
wangbo synced commits to refs/pull/10050/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:16 +08:00
Compare 9 commits »
wangbo synced commits to refs/pull/10728/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:16 +08:00
Compare 13 commits »
wangbo synced commits to refs/pull/10941/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:16 +08:00
Compare 13 commits »
wangbo synced commits to refs/pull/10258/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:16 +08:00
Compare 5 commits »
wangbo synced commits to refs/pull/10717/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:16 +08:00
Compare 14 commits »
wangbo synced commits to refs/pull/10534/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:16 +08:00
Compare 23 commits »
wangbo synced commits to refs/pull/10605/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:16 +08:00
Compare 13 commits »
wangbo synced commits to refs/pull/10698/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:16 +08:00
Compare 18 commits »
wangbo synced commits to refs/pull/10726/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:16 +08:00
Compare 23 commits »
wangbo synced commits to refs/pull/10403/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:16 +08:00
Compare 13 commits »
wangbo synced commits to refs/pull/10347/merge at wangbo/ComfyUI from mirror 2026-01-25 05:40:16 +08:00
Compare 14 commits »
wangbo synced new reference austin/node-convert-to-list to wangbo/ComfyUI from mirror 2026-01-25 05:40:15 +08:00
wangbo synced commits to pysssss/basic-glsl-shader-node at wangbo/ComfyUI from mirror 2026-01-25 05:40:15 +08:00
46a83e9630 Try fix build
5b0fb64d20 Support multiple outputs
521ca3b5d2 Merge branch 'pysssss/combo-hidden-index-output' into pysssss/basic-glsl-shader-node
53094efd1d add hidden index input from frontend and output it
e89b22993a Support ModelScope-Trainer/DiffSynth LoRA format for Flux.2 Klein models (#12042)
Compare 6 commits »