QAQ: Quality Adaptive Quantization for LLM KV Cache

Wen Cheng, Shichen Dong, Jiayu Qin, Wei Wang; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2025, pp. 2563-2571

Abstract


The emergence of LLMs has ignited a fresh surge of breakthroughs in NLP applications, particularly in domains such as question-answering systems and text generation. As the need for longer context grows, a significant bottleneck in model deployment emerges due to the linear expansion of the Key-Value (KV) cache with the context length. Existing methods primarily rely on various hypotheses, such as sorting the KV cache based on attention scores for replacement or eviction, to compress the KV cache and improve model throughput. However, the heuristics used by these strategies may wrongly evict essential KV cache entries, which can significantly degrade model performance. In this paper, we propose QAQ, a Quality Adaptive Quantization scheme for the KV cache. We theoretically demonstrate that the key cache and value cache exhibit distinct sensitivities to quantization, leading to the formulation of separate strategies for their non-uniform quantization. By integrating dedicated outlier handling with an improved attention-aware approach, QAQ achieves up to a 10x compression of the KV cache size with a negligible impact on model performance. QAQ significantly reduces the practical hurdles of deploying LLMs, opening up new possibilities for longer-context applications. We make our code publicly available to support reproducibility and promote broader awareness within the community.
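To illustrate one of the core ideas the abstract mentions, the sketch below shows uniform quantization with dedicated outlier handling: the largest-magnitude entries are kept in full precision while the remaining values are quantized to a small number of bits. This is a hypothetical, simplified illustration of the general technique, not the paper's actual method; the function name, the top-k outlier criterion, and the per-tensor (rather than per-channel or attention-aware) scaling are all assumptions made for brevity.

```python
def quantize_with_outliers(values, n_bits=4, outlier_frac=0.01):
    """Illustrative sketch (not QAQ itself): uniform n-bit quantization
    that stores the top outlier_frac largest-magnitude values exactly."""
    # Select the largest-magnitude entries as outliers (kept uncompressed).
    k = max(1, int(len(values) * outlier_frac))
    order = sorted(range(len(values)), key=lambda i: abs(values[i]), reverse=True)
    outlier_idx = set(order[:k])

    # Fit the quantization range to the inliers only, so outliers
    # do not stretch the grid and blow up quantization error.
    inliers = [v for i, v in enumerate(values) if i not in outlier_idx]
    lo, hi = min(inliers), max(inliers)
    levels = (1 << n_bits) - 1
    scale = (hi - lo) / levels if hi > lo else 1.0

    # Quantize then dequantize inliers; pass outliers through unchanged.
    dequantized = []
    for i, v in enumerate(values):
        if i in outlier_idx:
            dequantized.append(v)
        else:
            q = round((v - lo) / scale)
            dequantized.append(lo + q * scale)
    return dequantized
```

In a real KV-cache setting the quantized codes (not the dequantized floats) would be stored, with separate bit widths for the key and value caches reflecting their different quantization sensitivities.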

Related Material


[bibtex]
@InProceedings{Cheng_2025_ICCV,
    author    = {Cheng, Wen and Dong, Shichen and Qin, Jiayu and Wang, Wei},
    title     = {QAQ: Quality Adaptive Quantization for LLM KV Cache},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2025},
    pages     = {2563-2571}
}