PolarQuant: Quantizing KV Caches with Polar Transformation

Insu Han
Amin Karbasi
Praneeth Kacham
Amir Zandieh
2025

Abstract

Large language models (LLMs) require significant memory to store Key-Value (KV) embeddings in their KV cache, especially when handling long contexts. Quantization of these KV embeddings is a common technique to reduce memory consumption.
This work introduces PolarQuant, a novel quantization method employing random preconditioning and polar transformation. Our method first preconditions the embedding vectors using a random projection matrix. Then, we transform these vectors into polar coordinates and quantize the resulting polar representation.
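Below is a minimal NumPy sketch of these two steps: random preconditioning followed by a polar (spherical) re-parameterization whose angles are then coded on a fixed grid. It is illustrative only; the function names, the orthogonal preconditioner, the 4-bit angle budget, and the choice to keep the radius unquantized are assumptions of this sketch rather than details taken from the paper.

```python
import numpy as np

def random_preconditioner(d, seed=0):
    # Random orthogonal matrix (QR factor of a Gaussian matrix) used to
    # precondition the embedding vectors before the polar transform.
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return q

def to_polar(y):
    # Spherical (polar) representation of y: one radius and d-1 angles.
    tail = np.sqrt(np.cumsum(y[::-1] ** 2)[::-1])  # tail[i] = ||y[i:]||
    radius = tail[0]
    angles = np.arctan2(tail[1:], y[:-1])          # first d-2 angles lie in [0, pi]
    angles[-1] = np.arctan2(y[-1], y[-2])          # last angle lies in (-pi, pi]
    return radius, angles

def from_polar(radius, angles):
    # Invert the spherical transform.
    sines = np.concatenate(([1.0], np.cumprod(np.sin(angles))))
    return radius * np.append(sines[:-1] * np.cos(angles), sines[-1])

def polar_quantize(x, Q, bits=4):
    # Precondition, convert to polar coordinates, and code each angle on a
    # fixed uniform grid over its known range; no per-block scale or zero point.
    radius, angles = to_polar(Q @ x)
    lo = np.zeros_like(angles)
    lo[-1] = -np.pi                                # only the last angle can be negative
    width = np.where(np.arange(angles.size) < angles.size - 1, np.pi, 2 * np.pi)
    levels = 2 ** bits
    codes = np.clip(np.floor((angles - lo) / width * levels), 0, levels - 1)
    return radius, codes.astype(np.uint8)

def polar_dequantize(radius, codes, Q, bits=4):
    # Map codes to bucket midpoints, rebuild the vector, undo the preconditioning.
    levels = 2 ** bits
    lo = np.zeros(codes.size)
    lo[-1] = -np.pi
    width = np.where(np.arange(codes.size) < codes.size - 1, np.pi, 2 * np.pi)
    angles = lo + (codes + 0.5) / levels * width
    return Q.T @ from_polar(radius, angles)
```

The fixed grids in polar_quantize rely only on each angle's known range, which is what lets the sketch skip storing per-block scales and zero points.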
Our key insight is that, after random preconditioning, the angles in the polar representation exhibit a tightly bounded and concentrated distribution with an analytically computable form. This eliminates the need for explicit normalization, a computationally expensive step required by traditional quantization methods.
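To illustrate what such an analytically computable form can look like, consider one simplified setting, which is an assumption for this illustration rather than the paper's exact analysis: if the preconditioned vector is approximately isotropic in R^d and the polar representation is the standard spherical parameterization, then the k-th angle has density proportional to sin^(d-1-k)(θ) on [0, π] for k = 1, ..., d-2, and the last angle is uniform on (-π, π]. For large d - k this density concentrates in a window of width O(1/sqrt(d-k)) around π/2, so a fixed quantization grid can cover essentially all of the mass without any per-block rescaling.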
Normalization introduces significant memory overhead because quantization parameters (e.g., zero point and scale) must be stored in full precision for each data block. This can add 1 to 2 bits per quantized value, depending on the block size. PolarQuant bypasses this normalization step, enabling substantial memory savings.
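For a concrete sense of where the 1 to 2 bits come from, the short calculation below assumes each block stores two 16-bit parameters, a scale and a zero point; the exact parameter format and block sizes vary across methods.

```python
def per_value_overhead_bits(block_size, num_params=2, param_bits=16):
    # Extra bits per quantized value spent on per-block quantization
    # parameters (e.g., a zero point and a scale) kept in full precision.
    return num_params * param_bits / block_size

print(per_value_overhead_bits(32))  # 1.0 extra bit per value
print(per_value_overhead_bits(16))  # 2.0 extra bits per value
```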
Empirical evaluation demonstrates that PolarQuant incurs lower memory overhead than existing normalization-based KV quantization techniques. Moreover, it improves performance across various generation tasks, particularly those involving long-context understanding.