comfyanonymous | 15b312de7a | Optimize nvfp4 lora applying. (#11854) | 2026-01-13 19:23:58 -05:00
comfyanonymous | 117e7a5853 | Refactor to try to lower mem usage. (#11840) | 2026-01-12 21:01:52 -08:00
comfyanonymous | b3c0e4de57 | Make loras work on nvfp4 models. (#11837) | 2026-01-12 22:33:54 -05:00
  The initial applying is a bit slow but will probably be sped up in the future.
comfyanonymous | 73e3a9e676 | Clamp output when rounding weight to prevent NaN. | 2024-10-19 19:07:10 -04:00
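
A note on the clamp: torch.float8_e4m3fn has no infinities, so a value pushed past the finite range (for example by rounding noise) can come back as NaN after the cast. A minimal sketch of the clamp-then-cast pattern, assuming a PyTorch build with float8 dtypes; the function name is illustrative, not ComfyUI's API:

    import torch

    def round_weight_to_fp8(w: torch.Tensor, dtype=torch.float8_e4m3fn) -> torch.Tensor:
        # Clamp to the dtype's finite range (+/-448 for e4m3fn) before the
        # cast so out-of-range values cannot turn into NaN.
        finfo = torch.finfo(dtype)
        return w.clamp(min=finfo.min, max=finfo.max).to(dtype)
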
comfyanonymous | 7d2467e830 | Some minor cleanups. | 2024-10-05 13:22:39 -04:00
comfyanonymous | 00a5d08103 | Lower fp8 lora memory usage. | 2024-09-03 01:25:05 -04:00
comfyanonymous | 2ca8f6e23d | Make the stochastic fp8 rounding reproducible. | 2024-08-26 15:12:06 -04:00
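
One way to make stochastic rounding reproducible is to draw its noise from an explicitly seeded torch.Generator instead of the global RNG; a minimal sketch (the fixed seed is an illustrative choice, not what the repo uses):

    import torch

    # A dedicated, seeded generator for the rounding noise: the same noise is
    # drawn on every run, so the rounded fp8 weights come out identical.
    gen = torch.Generator(device="cpu")
    gen.manual_seed(1234)
    noise = torch.rand(1024, generator=gen)
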
comfyanonymous | 7985ff88b9 | Use less memory in float8 lora patching by doing calculations in fp16. | 2024-08-26 14:45:58 -04:00
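
The memory saving here comes from where the intermediate math happens. A sketch of the general pattern, with illustrative names rather than ComfyUI's API: upcast the fp8 weight, build the low-rank delta in fp16, then round back to fp8. An fp16 intermediate is half the size of an fp32 one (roughly 32 MB instead of 64 MB for a 4096x4096 weight), at the cost of some precision in the delta.

    import torch

    def patch_fp8_weight_with_lora(w_fp8: torch.Tensor, lora_up: torch.Tensor,
                                   lora_down: torch.Tensor, strength: float = 1.0,
                                   compute_dtype=torch.float16) -> torch.Tensor:
        # Upcast, apply the low-rank update in fp16, then round back to fp8.
        w = w_fp8.to(compute_dtype)
        delta = (lora_up.to(compute_dtype) @ lora_down.to(compute_dtype)) * strength
        w = w + delta
        finfo = torch.finfo(w_fp8.dtype)
        # Clamp to the finite fp8 range before casting back (see the NaN fix above).
        return w.clamp(min=finfo.min, max=finfo.max).to(w_fp8.dtype)
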
comfyanonymous | 4506ddc86a | Better subnormal fp8 stochastic rounding. Thanks Ashen. | 2024-08-19 13:38:03 -04:00
comfyanonymous | 22ec02afc0 | Handle subnormal numbers in float8 rounding. | 2024-08-19 05:51:08 -04:00
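
The two subnormal entries above deal with the bottom of the fp8 range, where the spacing between representable values stops shrinking. A rough, self-contained sketch of subnormal-aware stochastic rounding (an approximation for illustration, not the repo's implementation): perturb each value by up to half the local fp8 spacing, flooring that spacing at the subnormal step, then let the round-to-nearest cast pick a side.

    import math
    import torch

    def stochastic_round_to_fp8(x: torch.Tensor, dtype=torch.float8_e4m3fn,
                                generator=None) -> torch.Tensor:
        mantissa_bits = 3 if dtype == torch.float8_e4m3fn else 2   # e4m3 / e5m2
        finfo = torch.finfo(dtype)
        min_normal_exp = int(math.log2(finfo.tiny))    # -6 for e4m3
        x = x.float()
        _, e = torch.frexp(x)            # |x| = m * 2**e with m in [0.5, 1)
        # Spacing of representable fp8 values around x; every subnormal shares
        # the spacing of the smallest normal binade instead of going to zero.
        step = torch.exp2((torch.clamp(e - 1, min=min_normal_exp) - mantissa_bits).float())
        step = torch.where(x == 0, torch.zeros_like(step), step)   # keep exact zeros
        noise = torch.rand(x.shape, generator=generator, device=x.device) - 0.5
        y = x + noise * step
        # Clamp so noise near the top of the range cannot overflow to NaN.
        return y.clamp(min=finfo.min, max=finfo.max).to(dtype)
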
comfyanonymous | bb222ceddb | Fix loras having a weak effect when applied on fp8. | 2024-08-17 15:20:17 -04:00
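
The weak-effect fix is the oldest of the fp8 lora commits listed here, and the stochastic rounding work above plays off the same observation: a lora delta much smaller than one fp8 step vanishes entirely under plain round-to-nearest, while stochastic rounding preserves its average contribution. Treat that connection as an inference from the commit titles rather than a statement of what the original bug was; a tiny demonstration:

    import torch

    # 1.5 is exactly representable in e4m3 and its neighbours are 0.125 apart,
    # so a 0.01 delta is far below half a step.
    w = torch.full((100000,), 1.5)
    delta = 0.01

    nearest = (w + delta).to(torch.float8_e4m3fn).float()
    print(nearest.mean().item())        # 1.5: the delta is rounded away

    g = torch.Generator().manual_seed(0)
    noise = (torch.rand(w.shape, generator=g) - 0.5) * 0.125    # +/- half a step
    stochastic = (w + delta + noise).clamp(-448, 448).to(torch.float8_e4m3fn).float()
    print(stochastic.mean().item())     # ~1.51: the delta survives on average
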