Jumping through Local Minima: Quantization in the Loss Landscape of Vision Transformers

Natalia Frumkin, Dibakar Gope, Diana Marculescu; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 16978-16988

Abstract


Quantization scale and bit-width are the most important parameters when considering how to quantize a neural network. Prior work focuses on optimizing quantization scales in a global manner through gradient methods (gradient descent and Hessian analysis). Yet, when applying perturbations to quantization scales, we observe a very jagged, highly non-smooth test loss landscape. In fact, small perturbations in quantization scale can greatly affect accuracy, yielding a 0.5-0.8% accuracy boost in 4-bit quantized vision transformers (ViTs). In this regime, gradient methods break down, since they cannot reliably reach local minima. In our work, dubbed Evol-Q, we use evolutionary search to effectively traverse the non-smooth landscape. Additionally, we propose using an InfoNCE loss, which not only helps combat overfitting on the small (1,000 images) calibration dataset but also makes traversing such a highly non-smooth surface easier. Evol-Q improves the top-1 accuracy of a fully quantized ViT-Base by 10.30%, 0.78%, and 0.15% for 3-bit, 4-bit, and 8-bit weight quantization levels, respectively. Extensive experiments on a variety of CNN and ViT architectures further demonstrate its robustness in extreme quantization scenarios.
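
To make the abstract's two ingredients concrete, below is a minimal sketch of evolutionary search over a quantization scale paired with an InfoNCE objective. This is an illustration only: the function names (fake_quantize, info_nce, evolve_scale), the simple (1+lambda)-style search loop, and all hyperparameters are assumptions for exposition, not the paper's implementation; Evol-Q's actual block-wise search procedure is described in the full paper.

```python
import torch
import torch.nn.functional as F

def fake_quantize(w, scale, bits=4):
    # Uniform symmetric quantization: scale, round to the nearest
    # integer level, clamp to the bit-width's range, then dequantize.
    qmax = 2 ** (bits - 1) - 1
    return torch.clamp(torch.round(w / scale), -qmax - 1, qmax) * scale

def info_nce(feats_q, feats_fp, temperature=0.1):
    # Contrastive objective: the quantized model's feature for each image
    # should match the full-precision feature of the *same* image
    # (positives on the diagonal, other images in the batch as negatives).
    logits = F.normalize(feats_q, dim=1) @ F.normalize(feats_fp, dim=1).T
    labels = torch.arange(feats_q.size(0))
    return F.cross_entropy(logits / temperature, labels)

def evolve_scale(w, scale, loss_fn, generations=10, pop=16, sigma=0.01):
    # Gradient-free (1+lambda)-style search: sample small multiplicative
    # perturbations of the scale and keep the best candidate found so far.
    # This is how a jagged, non-smooth loss surface can be traversed
    # without relying on gradients.
    best_scale = scale
    best_loss = loss_fn(fake_quantize(w, best_scale))
    for _ in range(generations):
        for _ in range(pop):
            cand = best_scale * (1 + sigma * torch.randn_like(best_scale))
            loss = loss_fn(fake_quantize(w, cand))
            if loss < best_loss:
                best_scale, best_loss = cand, loss
    return best_scale
```

In this reading, the contrastive loss uses the full-precision model's output for the same calibration image as the positive pair, which is one plausible way a 1,000-image calibration set can regularize the search rather than be overfit.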

Related Material


[pdf] [supp] [arXiv]
@InProceedings{Frumkin_2023_ICCV,
    author    = {Frumkin, Natalia and Gope, Dibakar and Marculescu, Diana},
    title     = {Jumping through Local Minima: Quantization in the Loss Landscape of Vision Transformers},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {16978-16988}
}