wangbo synced commits to refs/pull/10884/head at wangbo/ComfyUI from mirror 2025-11-28 00:47:07 +08:00
3883bb29fd Relocate elif block and set Wan VAE dim directly without using pruning rate for lightvae
wangbo synced commits to refs/pull/10864/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:07 +08:00
ba4d91da5b Merge 3322d21eaca527a00cd023a43a4a1a72d4c67355 into 3f382a4f98
3f382a4f98 quant ops: Dequantize weight in-place (#10935)
Compare 2 commits »
wangbo synced commits to refs/pull/10854/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:07 +08:00
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
eaf68c9b5b Make lora training work on Z Image and remove some redundant nodes. (#10927)
cc6a8dcd1a Dataset Processing Nodes and Improved LoRA Trainer Nodes with multi resolution supports. (#10708)
Compare 19 commits »
wangbo synced commits to refs/pull/10853/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:07 +08:00
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
eaf68c9b5b Make lora training work on Z Image and remove some redundant nodes. (#10927)
Compare 4 commits »
wangbo synced commits to refs/pull/10845/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:07 +08:00
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
eaf68c9b5b Make lora training work on Z Image and remove some redundant nodes. (#10927)
cc6a8dcd1a Dataset Processing Nodes and Improved LoRA Trainer Nodes with multi resolution supports. (#10708)
Compare 5 commits »
wangbo synced commits to refs/pull/10798/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:07 +08:00
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
Compare 3 commits »
wangbo synced commits to refs/pull/10716/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:06 +08:00
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
eaf68c9b5b Make lora training work on Z Image and remove some redundant nodes. (#10927)
cc6a8dcd1a Dataset Processing Nodes and Improved LoRA Trainer Nodes with multi resolution supports. (#10708)
Compare 14 commits »
wangbo synced commits to refs/pull/10772/head at wangbo/ComfyUI from mirror 2025-11-28 00:47:06 +08:00
e39b882c50 Merge branch 'comfyanonymous:master' into feature/custom-node-paths-cli-args
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
eaf68c9b5b Make lora training work on Z Image and remove some redundant nodes. (#10927)
cc6a8dcd1a Dataset Processing Nodes and Improved LoRA Trainer Nodes with multi resolution supports. (#10708)
Compare 40 commits »
wangbo synced commits to refs/pull/10728/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:06 +08:00
3f382a4f98 quant ops: Dequantize weight in-place (#10935)
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
eaf68c9b5b Make lora training work on Z Image and remove some redundant nodes. (#10927)
Compare 40 commits »
wangbo synced commits to refs/pull/10698/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:06 +08:00
3f382a4f98 quant ops: Dequantize weight in-place (#10935)
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
eaf68c9b5b Make lora training work on Z Image and remove some redundant nodes. (#10927)
Compare 35 commits »
wangbo synced commits to refs/pull/10669/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:06 +08:00
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
Compare 3 commits »
wangbo synced commits to refs/pull/10639/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:06 +08:00
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
eaf68c9b5b Make lora training work on Z Image and remove some redundant nodes. (#10927)
cc6a8dcd1a Dataset Processing Nodes and Improved LoRA Trainer Nodes with multi resolution supports. (#10708)
Compare 39 commits »
wangbo synced commits to refs/pull/10626/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:06 +08:00
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
eaf68c9b5b Make lora training work on Z Image and remove some redundant nodes. (#10927)
cc6a8dcd1a Dataset Processing Nodes and Improved LoRA Trainer Nodes with multi resolution supports. (#10708)
Compare 43 commits »
wangbo synced commits to refs/pull/10605/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:06 +08:00
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
eaf68c9b5b Make lora training work on Z Image and remove some redundant nodes. (#10927)
cc6a8dcd1a Dataset Processing Nodes and Improved LoRA Trainer Nodes with multi resolution supports. (#10708)
Compare 23 commits »
wangbo synced commits to refs/pull/10534/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:06 +08:00
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
Compare 3 commits »
wangbo synced commits to refs/pull/10347/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:06 +08:00
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
eaf68c9b5b Make lora training work on Z Image and remove some redundant nodes. (#10927)
cc6a8dcd1a Dataset Processing Nodes and Improved LoRA Trainer Nodes with multi resolution supports. (#10708)
Compare 43 commits »
wangbo synced commits to refs/pull/10238/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:06 +08:00
1b6b99394a Merge cc337019ce5c4323b8d5b86fccc2a26c591be8c6 into f17251bec6
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
eaf68c9b5b Make lora training work on Z Image and remove some redundant nodes. (#10927)
cc6a8dcd1a Dataset Processing Nodes and Improved LoRA Trainer Nodes with multi resolution supports. (#10708)
Compare 27 commits »
wangbo synced commits to refs/pull/10726/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:06 +08:00
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
eaf68c9b5b Make lora training work on Z Image and remove some redundant nodes. (#10927)
cc6a8dcd1a Dataset Processing Nodes and Improved LoRA Trainer Nodes with multi resolution supports. (#10708)
Compare 29 commits »
wangbo synced commits to refs/pull/10197/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:06 +08:00
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
eaf68c9b5b Make lora training work on Z Image and remove some redundant nodes. (#10927)
cc6a8dcd1a Dataset Processing Nodes and Improved LoRA Trainer Nodes with multi resolution supports. (#10708)
Compare 43 commits »
wangbo synced commits to refs/pull/10050/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:06 +08:00
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
eaf68c9b5b Make lora training work on Z Image and remove some redundant nodes. (#10927)
cc6a8dcd1a Dataset Processing Nodes and Improved LoRA Trainer Nodes with multi resolution supports. (#10708)
Compare 5 commits »