Joined on 2024-08-30
wangbo synced commits to refs/pull/2313/merge at wangbo/ComfyUI-Manager from mirror 2025-11-28 01:07:04 +08:00
4a97227feb Update description in custom-node-list.json
Compare 2 commits »
wangbo synced commits to refs/pull/2313/head at wangbo/ComfyUI-Manager from mirror 2025-11-28 01:07:04 +08:00
4a97227feb Update description in custom-node-list.json
wangbo synced commits to refs/pull/9113/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:07 +08:00
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
eaf68c9b5b Make lora training work on Z Image and remove some redundant nodes. (#10927)
cc6a8dcd1a Dataset Processing Nodes and Improved LoRA Trainer Nodes with multi resolution supports. (#10708)
Compare 19 commits »
wangbo synced commits to refs/pull/8950/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:07 +08:00
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
eaf68c9b5b Make lora training work on Z Image and remove some redundant nodes. (#10927)
cc6a8dcd1a Dataset Processing Nodes and Improved LoRA Trainer Nodes with multi resolution supports. (#10708)
Compare 14 commits »
wangbo synced commits to refs/pull/8289/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:07 +08:00
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
eaf68c9b5b Make lora training work on Z Image and remove some redundant nodes. (#10927)
cc6a8dcd1a Dataset Processing Nodes and Improved LoRA Trainer Nodes with multi resolution supports. (#10708)
Compare 5 commits »
wangbo synced commits to refs/pull/10884/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:07 +08:00
3883bb29fd Relocate elif block and set Wan VAE dim directly without using pruning rate for lightvae
Compare 2 commits »
wangbo synced commits to refs/pull/8115/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:07 +08:00
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
eaf68c9b5b Make lora training work on Z Image and remove some redundant nodes. (#10927)
cc6a8dcd1a Dataset Processing Nodes and Improved LoRA Trainer Nodes with multi resolution supports. (#10708)
Compare 43 commits »
wangbo synced commits to refs/pull/9305/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:07 +08:00
bd34b164b0 Merge 1d380a5458a74fabdc5a8556c1902299b4b8d0cb into f17251bec6
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
eaf68c9b5b Make lora training work on Z Image and remove some redundant nodes. (#10927)
cc6a8dcd1a Dataset Processing Nodes and Improved LoRA Trainer Nodes with multi resolution supports. (#10708)
Compare 7 commits »
wangbo synced commits to refs/pull/10884/head at wangbo/ComfyUI from mirror 2025-11-28 00:47:07 +08:00
3883bb29fd Relocate elif block and set Wan VAE dim directly without using pruning rate for lightvae
wangbo synced commits to refs/pull/10772/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:07 +08:00
e39b882c50 Merge branch 'comfyanonymous:master' into feature/custom-node-paths-cli-args
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
eaf68c9b5b Make lora training work on Z Image and remove some redundant nodes. (#10927)
Compare 20 commits »
wangbo synced commits to refs/pull/10864/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:07 +08:00
3f382a4f98 quant ops: Dequantize weight in-place (#10935)
Compare 2 commits »
wangbo synced commits to refs/pull/10854/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:07 +08:00
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
eaf68c9b5b Make lora training work on Z Image and remove some redundant nodes. (#10927)
cc6a8dcd1a Dataset Processing Nodes and Improved LoRA Trainer Nodes with multi resolution supports. (#10708)
Compare 19 commits »
wangbo synced commits to refs/pull/10853/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:07 +08:00
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
eaf68c9b5b Make lora training work on Z Image and remove some redundant nodes. (#10927)
Compare 4 commits »
wangbo synced commits to refs/pull/10845/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:07 +08:00
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
eaf68c9b5b Make lora training work on Z Image and remove some redundant nodes. (#10927)
cc6a8dcd1a Dataset Processing Nodes and Improved LoRA Trainer Nodes with multi resolution supports. (#10708)
Compare 5 commits »
wangbo synced commits to refs/pull/10798/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:07 +08:00
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
Compare 3 commits »
wangbo synced commits to refs/pull/10639/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:06 +08:00
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
eaf68c9b5b Make lora training work on Z Image and remove some redundant nodes. (#10927)
cc6a8dcd1a Dataset Processing Nodes and Improved LoRA Trainer Nodes with multi resolution supports. (#10708)
Compare 39 commits »
wangbo synced commits to refs/pull/10772/head at wangbo/ComfyUI from mirror 2025-11-28 00:47:06 +08:00
e39b882c50 Merge branch 'comfyanonymous:master' into feature/custom-node-paths-cli-args
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
eaf68c9b5b Make lora training work on Z Image and remove some redundant nodes. (#10927)
cc6a8dcd1a Dataset Processing Nodes and Improved LoRA Trainer Nodes with multi resolution supports. (#10708)
Compare 40 commits »
wangbo synced commits to refs/pull/10728/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:06 +08:00
3f382a4f98 quant ops: Dequantize weight in-place (#10935)
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
eaf68c9b5b Make lora training work on Z Image and remove some redundant nodes. (#10927)
Compare 40 commits »
wangbo synced commits to refs/pull/10716/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:06 +08:00
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
eaf68c9b5b Make lora training work on Z Image and remove some redundant nodes. (#10927)
cc6a8dcd1a Dataset Processing Nodes and Improved LoRA Trainer Nodes with multi resolution supports. (#10708)
Compare 14 commits »
wangbo synced commits to refs/pull/10698/merge at wangbo/ComfyUI from mirror 2025-11-28 00:47:06 +08:00
3f382a4f98 quant ops: Dequantize weight in-place (#10935)
f17251bec6 Account for the VRAM cost of weight offloading (#10733)
c38e7d6599 block info (#10841)
eaf68c9b5b Make lora training work on Z Image and remove some redundant nodes. (#10927)
Compare 35 commits »