Post Training Mixed Precision Quantization of Neural Networks Using First-Order Information

Arun Chauhan, Utsav Tiwari, Vikram N R; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2023, pp. 1343-1352

Abstract


Quantization is an efficient way of reducing both the memory footprint and the inference time of large Deep Neural Networks (DNNs), making their deployment feasible on resource-constrained devices. However, quantizing all layers uniformly with ultra-low-precision bits results in significant performance degradation. A promising approach to this problem is mixed-precision quantization, where higher bit precisions are assigned to the more sensitive layers. In this study, we introduce a method that uses only first-order information (i.e., gradients) to determine the sensitivity of each layer of a neural network for mixed-precision quantization, and we show that the proposed method matches the performance of counterpart methods that rely on second-order information (i.e., the Hessian) while requiring less computation. We then formulate mixed-precision bit allocation as an Integer Linear Programming (ILP) problem that uses the proposed sensitivity metric to assign a bit-width to each layer efficiently for a given model size. Furthermore, we use only post-training quantization techniques, yet achieve state-of-the-art results compared to popular mixed-precision methods that fine-tune the model on large amounts of training data. Extensive experiments on benchmark vision architectures with the ImageNet dataset demonstrate the superiority of our approach over existing mixed-precision methods. Our method achieves better or comparable results for ResNet18 (0.65% accuracy drop at 8x weight compression), ResNet50 (0.69% accuracy drop at 8x weight compression), MobileNet-V2 (0.49% accuracy drop at 8x weight compression) and Inception-V3 (1.30% accuracy drop at 8x weight compression) compared to state-of-the-art methods that require retraining or use the Hessian as the sensitivity metric for mixed-precision quantization.
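To make the two steps of the approach concrete, below is a minimal sketch (not the authors' released code) of a gradient-based, first-order per-layer sensitivity score and an ILP bit allocation, written in PyTorch with the off-the-shelf PuLP solver. The symmetric uniform quantizer, the calibration loader calib_loader, and the candidate bit-widths BIT_CHOICES are illustrative assumptions; the paper's exact sensitivity metric and ILP constraints may differ.

    import torch
    import pulp  # generic ILP modeling library with a bundled CBC solver

    BIT_CHOICES = [2, 4, 8]  # illustrative candidate bit-widths

    def quantize(w, bits):
        # Plain symmetric uniform quantization, used here only to form the
        # weight perturbation Q_b(W) - W that enters the sensitivity proxy.
        qmax = 2 ** (bits - 1) - 1
        scale = w.abs().max().clamp_min(1e-8) / qmax
        return torch.round(w / scale).clamp(-qmax - 1, qmax) * scale

    def first_order_sensitivity(model, loss_fn, calib_loader):
        # Accumulate gradients over a small calibration set, then score each
        # (layer, bit-width) pair by |g . (Q_b(W) - W)|, a first-order Taylor
        # estimate of the loss change caused by quantizing that layer.
        model.zero_grad()
        for x, y in calib_loader:
            loss_fn(model(x), y).backward()
        scores = {}
        for name, p in model.named_parameters():
            if p.grad is None or p.dim() < 2:  # skip biases / norm params
                continue
            for b in BIT_CHOICES:
                dw = quantize(p.detach(), b) - p.detach()
                scores[(name, b)] = (p.grad * dw).sum().abs().item()
        return scores

    def allocate_bits(scores, sizes, budget_bits):
        # ILP: pick exactly one bit-width per layer, minimizing the summed
        # sensitivity subject to a total model-size budget (in bits).
        layers = sorted({n for n, _ in scores})
        prob = pulp.LpProblem("mixed_precision", pulp.LpMinimize)
        x = pulp.LpVariable.dicts("x", (layers, BIT_CHOICES), cat="Binary")
        prob += pulp.lpSum(scores[(n, b)] * x[n][b]
                           for n in layers for b in BIT_CHOICES)
        for n in layers:  # exactly one bit-width per layer
            prob += pulp.lpSum(x[n][b] for b in BIT_CHOICES) == 1
        prob += pulp.lpSum(sizes[n] * b * x[n][b]
                           for n in layers for b in BIT_CHOICES) <= budget_bits
        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        return {n: next(b for b in BIT_CHOICES if x[n][b].value() > 0.5)
                for n in layers}

Here sizes maps each layer name to its parameter count, so the budget constraint bounds total weight storage in bits; for 32-bit float weights, an 8x compression target corresponds to budget_bits = 4 * (total number of weights).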

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Chauhan_2023_ICCV,
    author    = {Chauhan, Arun and Tiwari, Utsav and R, Vikram N},
    title     = {Post Training Mixed Precision Quantization of Neural Networks Using First-Order Information},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2023},
    pages     = {1343-1352}
}