kijai | dafa2695d4 | Code cleanup, don't force the fp32 layers as it has minimal effect | 2025-12-03 16:08:47 +02:00
kijai | b24bac604a | Update chunking | 2025-12-03 16:08:47 +02:00
kijai | a3ce1e02d7 | Further reduce peak VRAM consumption by chunking ffn | 2025-12-03 16:08:47 +02:00
kijai | 1baf5ec4af | Reduce peak VRAM usage a bit | 2025-12-03 16:08:47 +02:00
kijai | f1a5f6f5b3 | Revert RoPE scaling to simpler one | 2025-12-03 16:08:47 +02:00
kijai | 68c712ef7e | Don't scale RoPE for lite model as that just doesn't work... | 2025-12-03 16:08:46 +02:00
kijai | dd318ada2f | Support block replace patches (SLG mostly) | 2025-12-03 16:08:46 +02:00
kijai | 292c3576c2 | Fix I2V, add necessary latent post process nodes | 2025-12-03 16:08:46 +02:00
kijai | 8f02217f85 | Code cleanup, optimizations, use fp32 for all layers originally at fp32 | 2025-12-03 16:08:46 +02:00
kijai | 0920cdcf63 | Add transformer_options for attention | 2025-12-03 16:08:46 +02:00
kijai | 5f1346ccd1 | Fix fp8 | 2025-12-03 16:08:46 +02:00
kijai | 530df3d710 | Add Kandinsky5 model support (lite and pro T2V tested to work) | 2025-12-03 16:08:46 +02:00
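
Commits a3ce1e02d7 and b24bac604a above lower peak VRAM by chunking the feed-forward network. The repository's actual implementation is not shown in this log; the following is a minimal sketch of the general technique, assuming a standard per-token PyTorch FFN, with the names FeedForward and chunk_size chosen only for illustration.

```python
# Minimal sketch of sequence-dimension FFN chunking (illustrative, not the repo's code).
import torch
import torch.nn as nn

class FeedForward(nn.Module):
    def __init__(self, dim: int, hidden_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, dim),
        )

    def forward(self, x: torch.Tensor, chunk_size: int = 0) -> torch.Tensor:
        # x: (batch, seq_len, dim). Unchunked, the intermediate activation is
        # (batch, seq_len, hidden_dim), which dominates peak memory for long sequences.
        if chunk_size <= 0:
            return self.net(x)
        # The FFN acts per token, so slicing along the sequence gives identical
        # results while only one slice's hidden activation is alive at a time.
        return torch.cat(
            [self.net(chunk) for chunk in x.split(chunk_size, dim=1)],
            dim=1,
        )
```

The trade-off is more, smaller matmul launches; the chunk size controls the balance between peak memory and throughput.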
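
Commits f1a5f6f5b3 and 68c712ef7e above concern RoPE position scaling, with the lite model ending up unscaled. As a hedged illustration only (the function name rope_freqs and the scale parameter are assumptions, not the repository's API), a plain RoPE frequency table with an optional position scale looks roughly like this, where scale=1.0 is the unscaled case.

```python
# Illustrative RoPE frequency computation with an optional position scale
# (scale=1.0 means no scaling, as the lite-model commit above implies).
import torch

def rope_freqs(seq_len: int, head_dim: int, theta: float = 10000.0, scale: float = 1.0):
    inv_freq = 1.0 / (theta ** (torch.arange(0, head_dim, 2, dtype=torch.float32) / head_dim))
    positions = torch.arange(seq_len, dtype=torch.float32) * scale  # scaled (or plain) positions
    angles = torch.outer(positions, inv_freq)                       # (seq_len, head_dim // 2)
    return torch.cos(angles), torch.sin(angles)
```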
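
Commits 8f02217f85 and dafa2695d4 above deal with which layers stay in fp32: first keeping layers that were originally fp32 at fp32, then no longer forcing that because the effect was minimal. Below is a generic sketch of that kind of selective casting; the helper name and the substring match on "norm" are assumptions for illustration, not the repository's actual rule.

```python
# Illustrative selective-precision cast: most modules go to a low-precision dtype,
# while name-matched modules are kept in fp32. Not the repository's actual logic.
import torch
import torch.nn as nn

def cast_with_fp32_exceptions(model: nn.Module, dtype: torch.dtype,
                              keep_fp32_substrings=("norm",)) -> nn.Module:
    model.to(dtype)  # cast everything first (e.g. to fp16/bf16)
    for name, module in model.named_modules():
        if any(key in name for key in keep_fp32_substrings):
            module.to(torch.float32)  # restore numerically sensitive layers to fp32
    return model
```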