Automated Log-Scale Quantization for Low-Cost Deep Neural Networks

Sangyun Oh, Hyeonuk Sim, Sugil Lee, Jongeun Lee; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 742-751

Abstract


Quantization plays an important role in deep neural network (DNN) hardware. In particular, logarithmic quantization has multiple advantages for DNN hardware implementations, and its weakness of lower performance at high precision compared with linear quantization has recently been remedied by what we call selective two-word logarithmic quantization (STLQ). However, there is a lack of training methods designed for STLQ, or even for logarithmic quantization in general. In this paper we propose a novel STLQ-aware training method, which significantly outperforms the previous state-of-the-art training method for STLQ. Moreover, our results demonstrate that with our new training method, STLQ applied to the weight parameters of ResNet-18 can achieve the same level of performance as the state-of-the-art quantization method APoT at 3-bit precision. We also apply our method to various DNNs in image enhancement and semantic segmentation, showing competitive results.
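The abstract refers to log-scale (power-of-two) quantization and to STLQ, in which a small fraction of weights is represented with two power-of-two terms instead of one. The NumPy sketch below illustrates that idea under assumed conventions: the codebook layout, the selection of weights by residual error, and the `ratio` parameter are all illustrative choices, not the authors' implementation or their STLQ-aware training method.

```python
import numpy as np

def log_quantize(w, n_bits=3):
    """Round each weight to the nearest representable power of two (sign preserved).

    Exponents are clipped to an n_bits range below the largest exponent, with
    very small weights rounded to zero (a common convention; the paper's exact
    codebook may differ).
    """
    sign = np.sign(w)
    mag = np.abs(w)
    if mag.max() == 0:
        return np.zeros_like(w)
    max_exp = np.floor(np.log2(mag.max()))
    n_levels = 2 ** n_bits - 1                       # one code reserved for zero
    exp = np.round(np.log2(np.maximum(mag, 1e-12)))
    exp = np.clip(exp, max_exp - n_levels + 1, max_exp)
    q = sign * 2.0 ** exp
    q[mag < 2.0 ** (max_exp - n_levels)] = 0.0       # underflow -> zero
    return q

def stlq_quantize(w, n_bits=3, ratio=0.05):
    """Selective two-word log quantization (illustrative sketch).

    Every weight gets a one-word power-of-two code; the `ratio` fraction of
    weights with the largest residual error additionally gets a second
    power-of-two term approximating that residual.
    """
    q1 = log_quantize(w, n_bits)
    residual = w - q1
    k = max(1, int(ratio * w.size))
    idx = np.argsort(np.abs(residual).ravel())[-k:]  # worst-approximated weights
    q2 = log_quantize(residual.ravel()[idx], n_bits) # second word for selected weights
    out = q1.ravel().copy()
    out[idx] += q2
    return out.reshape(w.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=(256,)).astype(np.float32)
    print("1-word log-quant MSE :", np.mean((w - log_quantize(w)) ** 2))
    print("STLQ (2-word, 5%) MSE:", np.mean((w - stlq_quantize(w)) ** 2))
```

Even in this simplified form, adding a second power-of-two term to a small, selectively chosen subset of weights reduces quantization error noticeably, which is the gap between plain logarithmic and linear quantization that STLQ is described as closing.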

Related Material


@InProceedings{Oh_2021_CVPR,
  author    = {Oh, Sangyun and Sim, Hyeonuk and Lee, Sugil and Lee, Jongeun},
  title     = {Automated Log-Scale Quantization for Low-Cost Deep Neural Networks},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2021},
  pages     = {742-751}
}