ALPS: Adaptive Quantization of Deep Neural Networks With GeneraLized PositS

Hamed F. Langroudi, Vedant Karia, Zachariah Carmichael, Abdullah Zyarah, Tej Pandit, John L. Gustafson, Dhireesha Kudithipudi; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2021, pp. 3100-3109

Abstract


In this paper, a new adaptive quantization algorithm for the generalized posit format is presented, to optimally represent the dynamic range and distribution of deep neural network parameters. Adaptation is achieved by minimizing the intra-layer posit quantization error with a compander. The efficacy of the proposed quantization algorithm is studied within a new low-precision framework, ALPS, on ResNet-50 and EfficientNet models for classification tasks. Results show that low-precision DNNs using generalized posits outperform other well-known numerical formats, including standard posits, in both accuracy and energy dissipation.
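To make the idea concrete, the sketch below decodes a generalized posit (sign, regime, exponent, fraction fields, with configurable width `n` and exponent size `es`) and then searches over `es` and a power-of-two scale for the setting that minimizes quantization error on a weight tensor. This is an illustrative toy, not the ALPS algorithm: the function names are invented here, and the grid search over a scale factor stands in for the paper's compander-based error minimization.

```python
import numpy as np


def decode_posit(bits, n=8, es=1):
    """Decode an n-bit generalized posit (exponent size es) to a float.

    Value = (-1)^sign * useed^k * 2^e * (1 + fraction), useed = 2^(2^es),
    where k is derived from the run length of the regime bits.
    """
    mask = (1 << n) - 1
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):
        return float("nan")  # NaR (not a real)
    sign = 1.0
    if bits >> (n - 1):
        sign = -1.0
        bits = (-bits) & mask  # negative posits use two's complement
    s = format(bits, f"0{n}b")[1:]  # the n-1 bits after the sign
    first = s[0]
    run = len(s) - len(s.lstrip(first))  # regime run length
    k = run - 1 if first == "1" else -run
    rest = s[run + 1:]  # skip the regime-terminating bit
    e_bits = rest[:es]
    # Truncated exponent bits are padded with zeros on the right.
    e = int(e_bits, 2) << (es - len(e_bits)) if e_bits else 0
    f_bits = rest[es:]
    frac = 1.0 + (int(f_bits, 2) / (1 << len(f_bits)) if f_bits else 0.0)
    useed = 2 ** (2 ** es)
    return sign * (useed ** k) * (2.0 ** e) * frac


def posit_codebook(n=8, es=1):
    """All representable values of an (n, es) posit, sorted, NaR excluded."""
    vals = [decode_posit(b, n, es) for b in range(1 << n)]
    return np.array(sorted(v for v in vals if not np.isnan(v)))


def quantize(x, codebook):
    """Round each element of x to the nearest codebook value."""
    idx = np.clip(np.searchsorted(codebook, x), 1, len(codebook) - 1)
    left, right = codebook[idx - 1], codebook[idx]
    return np.where(np.abs(x - left) <= np.abs(right - x), left, right)


def adaptive_quantize(w, n=8, es_candidates=(0, 1, 2)):
    """Pick (es, power-of-two scale) minimizing MSE on tensor w.

    A toy stand-in for ALPS's compander-driven adaptation: it matches
    the posit distribution to the weights by brute-force search.
    """
    best = None
    for es in es_candidates:
        cb = posit_codebook(n, es)
        for shift in range(-4, 5):
            scale = 2.0 ** shift
            q = quantize(w / scale, cb) * scale
            mse = float(np.mean((w - q) ** 2))
            if best is None or mse < best[0]:
                best = (mse, es, scale, q)
    return best
```

For example, `adaptive_quantize(weights)` on a layer's weight array returns the lowest-MSE configuration; weights that are exactly representable (e.g. powers of two near 1.0) quantize with zero error.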

Related Material


[bibtex]
@InProceedings{Langroudi_2021_CVPR,
  author    = {Langroudi, Hamed F. and Karia, Vedant and Carmichael, Zachariah and Zyarah, Abdullah and Pandit, Tej and Gustafson, John L. and Kudithipudi, Dhireesha},
  title     = {ALPS: Adaptive Quantization of Deep Neural Networks With GeneraLized PositS},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2021},
  pages     = {3100-3109}
}