Low-bit Quantization of Neural Networks for Efficient Inference

Eli Kravchik, Fan Yang, Pavel Kisilev, Yoni Choukroun; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2019

Abstract


Recent machine learning methods use increasingly large deep neural networks to achieve state-of-the-art results in various tasks. The gains in performance come at the cost of a substantial increase in computation and storage requirements, which makes real-time implementation on resource-limited hardware a challenging task. One popular approach to addressing this challenge is to perform low-bit-precision computations via neural network quantization. However, aggressive quantization generally entails a severe penalty in accuracy and often requires retraining the network or resorting to higher-bit-precision quantization. In this paper, we formalize the linear quantization task as a Minimum Mean Squared Error (MMSE) problem for both weights and activations, allowing low-bit-precision inference without full network retraining. We propose the analysis and optimization of constrained MSE problems for performant hardware-aware quantization. The proposed approach allows 4-bit integer (INT4) quantization for deployment of pretrained models on limited hardware resources. Multiple experiments on various network architectures show that the suggested method yields state-of-the-art results with minimal loss of task accuracy.
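To make the core idea concrete, below is a minimal sketch of MMSE-based linear quantization in Python, assuming a symmetric uniform quantizer whose scale is chosen by a grid search over candidate clipping ranges. The function names, grid, and search range are illustrative assumptions, not the paper's exact optimization procedure.

# Illustrative sketch only: MMSE selection of a linear quantization scale.
# Assumes a symmetric uniform quantizer and a simple grid search; this is
# not the authors' exact algorithm.
import numpy as np

def quantize(w, scale, n_bits=4):
    # Symmetric uniform quantizer: round to the nearest integer level
    # in the signed range [-2^(b-1), 2^(b-1)-1], then dequantize.
    q_max = 2 ** (n_bits - 1) - 1          # e.g. 7 for INT4
    q = np.clip(np.round(w / scale), -q_max - 1, q_max)
    return q * scale

def mmse_scale(w, n_bits=4, n_grid=100):
    # Pick the scale minimizing the MSE ||w - Q(w)||^2 over a grid of
    # clipping fractions of the maximum absolute value.
    w_abs_max = np.abs(w).max()
    best_scale, best_mse = None, np.inf
    for alpha in np.linspace(0.1, 1.0, n_grid):
        scale = alpha * w_abs_max / (2 ** (n_bits - 1) - 1)
        mse = np.mean((w - quantize(w, scale, n_bits)) ** 2)
        if mse < best_mse:
            best_mse, best_scale = mse, scale
    return best_scale

# Example: quantize a pretrained weight tensor to INT4 without retraining.
w = np.random.randn(256, 256).astype(np.float32)
s = mmse_scale(w, n_bits=4)
w_q = quantize(w, s, n_bits=4)
print("MSE:", np.mean((w - w_q) ** 2))

The same per-tensor scale search applies to activations, typically using calibration batches to estimate their distribution before choosing the MMSE-optimal clipping range.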

Related Material


[bibtex]
@InProceedings{Kravchik_2019_ICCV,
author = {Kravchik, Eli and Yang, Fan and Kisilev, Pavel and Choukroun, Yoni},
title = {Low-bit Quantization of Neural Networks for Efficient Inference},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2019}
}