wangbo synced commits to refs/pull/7186/merge at wangbo/ComfyUI from mirror 2026-01-24 21:30:18 +08:00
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
e89b22993a Support ModelScope-Trainer/DiffSynth LoRA format for Flux.2 Klein models (#12042)
55bd606e92 LTX2: Refactor forward function for better VRAM efficiency and fix spatial inpainting (#12046)
Compare 5 commits »
wangbo synced commits to refs/pull/8127/merge at wangbo/ComfyUI from mirror 2026-01-24 21:30:18 +08:00
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
e89b22993a Support ModelScope-Trainer/DiffSynth LoRA format for Flux.2 Klein models (#12042)
55bd606e92 LTX2: Refactor forward function for better VRAM efficiency and fix spatial inpainting (#12046)
Compare 23 commits »
wangbo synced commits to refs/pull/11724/merge at wangbo/ComfyUI from mirror 2026-01-24 21:30:17 +08:00
5ddc0c1245 Merge branch 'master' into numpy_requirements
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
e89b22993a Support ModelScope-Trainer/DiffSynth LoRA format for Flux.2 Klein models (#12042)
Compare 6 commits »
wangbo synced commits to refs/pull/6586/merge at wangbo/ComfyUI from mirror 2026-01-24 21:30:17 +08:00
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
e89b22993a Support ModelScope-Trainer/DiffSynth LoRA format for Flux.2 Klein models (#12042)
55bd606e92 LTX2: Refactor forward function for better VRAM efficiency and fix spatial inpainting (#12046)
Compare 5 commits »
wangbo synced commits to refs/pull/11914/merge at wangbo/ComfyUI from mirror 2026-01-24 21:30:17 +08:00
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
Compare 3 commits »
wangbo synced commits to refs/pull/6786/merge at wangbo/ComfyUI from mirror 2026-01-24 21:30:17 +08:00
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
e89b22993a Support ModelScope-Trainer/DiffSynth LoRA format for Flux.2 Klein models (#12042)
55bd606e92 LTX2: Refactor forward function for better VRAM efficiency and fix spatial inpainting (#12046)
Compare 5 commits »
wangbo synced commits to refs/pull/12043/merge at wangbo/ComfyUI from mirror 2026-01-24 21:30:17 +08:00
c03a90ecfc Fix encode
Compare 2 commits »
wangbo synced commits to refs/pull/12043/head at wangbo/ComfyUI from mirror 2026-01-24 21:30:17 +08:00
c03a90ecfc Fix encode
wangbo synced commits to refs/pull/1622/merge at wangbo/ComfyUI from mirror 2026-01-24 21:30:17 +08:00
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
e89b22993a Support ModelScope-Trainer/DiffSynth LoRA format for Flux.2 Klein models (#12042)
55bd606e92 LTX2: Refactor forward function for better VRAM efficiency and fix spatial inpainting (#12046)
Compare 5 commits »
wangbo synced commits to refs/pull/11724/head at wangbo/ComfyUI from mirror 2026-01-24 21:30:17 +08:00
5ddc0c1245 Merge branch 'master' into numpy_requirements
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
e89b22993a Support ModelScope-Trainer/DiffSynth LoRA format for Flux.2 Klein models (#12042)
55bd606e92 LTX2: Refactor forward function for better VRAM efficiency and fix spatial inpainting (#12046)
Compare 105 commits »
wangbo synced commits to refs/pull/5256/merge at wangbo/ComfyUI from mirror 2026-01-24 21:30:17 +08:00
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
e89b22993a Support ModelScope-Trainer/DiffSynth LoRA format for Flux.2 Klein models (#12042)
55bd606e92 LTX2: Refactor forward function for better VRAM efficiency and fix spatial inpainting (#12046)
Compare 23 commits »
wangbo synced commits to refs/pull/11507/merge at wangbo/ComfyUI from mirror 2026-01-24 21:30:16 +08:00
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
e89b22993a Support ModelScope-Trainer/DiffSynth LoRA format for Flux.2 Klein models (#12042)
55bd606e92 LTX2: Refactor forward function for better VRAM efficiency and fix spatial inpainting (#12046)
Compare 5 commits »
wangbo synced commits to refs/pull/11513/merge at wangbo/ComfyUI from mirror 2026-01-24 21:30:16 +08:00
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
e89b22993a Support ModelScope-Trainer/DiffSynth LoRA format for Flux.2 Klein models (#12042)
55bd606e92 LTX2: Refactor forward function for better VRAM efficiency and fix spatial inpainting (#12046)
Compare 5 commits »
wangbo synced commits to refs/pull/11618/merge at wangbo/ComfyUI from mirror 2026-01-24 21:30:16 +08:00
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
e89b22993a Support ModelScope-Trainer/DiffSynth LoRA format for Flux.2 Klein models (#12042)
55bd606e92 LTX2: Refactor forward function for better VRAM efficiency and fix spatial inpainting (#12046)
Compare 5 commits »
wangbo synced commits to refs/pull/11571/merge at wangbo/ComfyUI from mirror 2026-01-24 21:30:16 +08:00
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
e89b22993a Support ModelScope-Trainer/DiffSynth LoRA format for Flux.2 Klein models (#12042)
55bd606e92 LTX2: Refactor forward function for better VRAM efficiency and fix spatial inpainting (#12046)
Compare 5 commits »
wangbo synced commits to refs/pull/11416/merge at wangbo/ComfyUI from mirror 2026-01-24 21:30:16 +08:00
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
e89b22993a Support ModelScope-Trainer/DiffSynth LoRA format for Flux.2 Klein models (#12042)
55bd606e92 LTX2: Refactor forward function for better VRAM efficiency and fix spatial inpainting (#12046)
Compare 5 commits »
wangbo synced commits to refs/pull/11553/merge at wangbo/ComfyUI from mirror 2026-01-24 21:30:16 +08:00
4e6a1b66a9 speed up and reduce VRAM of QWEN VAE and WAN (less so) (#12036)
9cf299a9f9 Make regular empty latent node work properly on flux 2 variants. (#12050)
e89b22993a Support ModelScope-Trainer/DiffSynth LoRA format for Flux.2 Klein models (#12042)
55bd606e92 LTX2: Refactor forward function for better VRAM efficiency and fix spatial inpainting (#12046)
Compare 5 commits »
wangbo synced commits to feat/cache-provider-api at wangbo/ComfyUI from mirror 2026-01-24 21:30:16 +08:00
dcf686857c fix: use hashable types in frozenset test and add dict test
17eed38750 fix: move _torch_available before usage and use importlib.util.find_spec
f4623c0e1b style: remove unused imports in test_cache_provider.py
5e4bbca1ad test: add unit tests for CacheProvider API
e17571d9be fix: use deterministic hash for cache keys instead of pickle
Compare 5 commits »
wangbo synced commits to feat/api-nodes/recraft-style at wangbo/ComfyUI from mirror 2026-01-24 21:30:15 +08:00
wangbo synced new reference feat/api-nodes/recraft-style to wangbo/ComfyUI from mirror 2026-01-24 21:30:15 +08:00