Layer Importance Estimation With Imprinting for Neural Network Quantization

Hongyang Liu, Sara Elkerdawy, Nilanjan Ray, Mostafa Elhoushi; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2021, pp. 2408-2417

Abstract


Neural network quantization achieves high compression rates by using fixed low-bit-width representations of weights and activations while maintaining the accuracy of the original high-precision network. However, mixed-precision (per-layer bit-width) quantization, which offers further compression and finer granularity than fixed-precision quantization, requires careful tuning to maintain accuracy. We propose an accuracy-aware criterion that quantifies each layer's importance. Our method applies imprinting per layer, which acts as a proxy module for efficient accuracy estimation. We rank the layers based on their accuracy gain over the preceding modules and iteratively quantize first those with the smallest gain. Previous mixed-precision methods rely either on expensive search techniques such as reinforcement learning (RL) or on end-to-end optimization, with little interpretability of the resulting quantization configuration. Our method is a one-shot, efficient, accuracy-aware information estimation and thus lends better interpretability to the selected bit-width configuration.
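
The abstract's procedure can be sketched as follows: imprint class prototypes from averaged embeddings at each layer, use a nearest-prototype classifier as a cheap proxy for that layer's accuracy, and rank layers by their accuracy gain over the preceding layer. The Python sketch below illustrates this idea under stated assumptions; it is not the authors' released code, and the function names (imprint_prototypes, proxy_accuracy, rank_layers_by_accuracy_gain) and the use of pooled intermediate activations are illustrative.

    # A minimal sketch of imprinting-based layer importance ranking,
    # assuming intermediate activations have been pooled into one
    # (N, d_l) embedding tensor per layer on a held-out labeled split.
    import torch
    import torch.nn.functional as F

    def imprint_prototypes(feats, labels, num_classes):
        # Imprinted classifier: one prototype per class, the L2-normalized
        # mean of that class's embeddings (assumes every class is present).
        protos = torch.stack([feats[labels == c].mean(dim=0)
                              for c in range(num_classes)])
        return F.normalize(protos, dim=1)

    def proxy_accuracy(feats, labels, protos):
        # Accuracy of the nearest-prototype (cosine similarity) classifier,
        # used as a cheap proxy for the accuracy attainable at this layer.
        logits = F.normalize(feats, dim=1) @ protos.t()
        return (logits.argmax(dim=1) == labels).float().mean().item()

    def rank_layers_by_accuracy_gain(layer_feats, labels, num_classes):
        # Rank layers by proxy-accuracy gain over the preceding layer;
        # layers with the smallest gain are candidates to quantize first.
        accs = [proxy_accuracy(f, labels,
                               imprint_prototypes(f, labels, num_classes))
                for f in layer_feats]
        gains = [accs[0]] + [accs[i] - accs[i - 1]
                             for i in range(1, len(accs))]
        return sorted(range(len(gains)), key=lambda i: gains[i])

Feeding this ranking into an iterative quantization loop, assigning the lowest bit-widths to the lowest-gain layers first, mirrors the high-level procedure the abstract describes.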

Related Material


[bibtex]
@InProceedings{Liu_2021_CVPR,
    author    = {Liu, Hongyang and Elkerdawy, Sara and Ray, Nilanjan and Elhoushi, Mostafa},
    title     = {Layer Importance Estimation With Imprinting for Neural Network Quantization},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2021},
    pages     = {2408-2417}
}