r/LocalLLaMA · 13h ago
benchmark · inference · quantization
Empirical study showing that KV cache quantization (q8_0, q4_0) has a significant, model-dependent quality impact, contrary to the conventional wisdom that q8_0 is "practically lossless." Gemma models degrade substantially (KL divergence 0.108-0.377 at q8_0) while Qwen remains robust (KL < 0.04). The methodology measures KL divergence between full-precision and quantized-cache outputs over 250K tokens spanning 6 task categories, letting engineers make informed quantization tradeoffs.
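As a rough illustration of the metric the study relies on (not the study's own code), per-token KL divergence between a full-precision run and a quantized-KV-cache run can be computed from the two models' logits; the function names and shapes here are assumptions:

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the vocabulary axis
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def per_token_kl(ref_logits, quant_logits, eps=1e-12):
    # KL(P_ref || P_quant) per token position, in nats.
    # ref_logits / quant_logits: arrays of shape (num_tokens, vocab_size)
    # from the fp16 baseline and the quantized-cache run, respectively.
    p = softmax(np.asarray(ref_logits, dtype=np.float64))
    q = softmax(np.asarray(quant_logits, dtype=np.float64))
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

# Sanity check: identical logits give (near-)zero divergence,
# while perturbed logits give a strictly positive one.
ref = np.array([[2.0, 1.0, 0.5], [0.0, 3.0, 1.0]])
perturbed = ref + np.array([[0.3, -0.2, 0.1], [-0.1, 0.2, 0.4]])
print(per_token_kl(ref, ref).max())        # ~0
print(per_token_kl(ref, perturbed).min())  # > 0
```

Averaging `per_token_kl` over a large corpus (the post uses 250K tokens) yields a single divergence number per model/quantization pair, which is how figures like "KL 0.108 at q8_0" are comparable across models.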