Simulated Quantization, Real Power Savings
Mart van Baalen, Brian Kahne, Eric Mahurin, Andrey Kuzmin, Andrii Skliar, Markus Nagel, Tijmen Blankevoort; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2022, pp. 2757-2761
Abstract
Reduced-precision hardware matrix multiplication accelerators are commonly employed to reduce the power consumption of neural network inference. Multiplier designs used in such accelerators possess an interesting property: when the same input bit is 0 for two consecutive compute cycles, the multiplier consumes less power. In this paper we show that this effect can be exploited to reduce the power consumption of neural networks by simulating low bit-width quantization on higher bit-width hardware. We show that simulating 4-bit quantization on 8-bit hardware can yield a relative reduction in power consumption of up to 17% on commonly used networks. Furthermore, we show that in this context bit operations (BOPs) are a good proxy for power efficiency, and that learning mixed-precision configurations that target lower BOPs achieves better trade-offs between accuracy and power efficiency.
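To make the mechanism concrete, the sketch below (not from the paper) illustrates the two ideas in the abstract with NumPy: quantizing weights and activations to 4 bits but storing them in int8 so the matrix multiplication still runs on 8-bit arithmetic, and counting bit operations as a power proxy. The helper names and the BOPs definition (MACs x weight bit-width x activation bit-width) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the two ideas in the abstract, using NumPy:
# (1) simulating 4-bit quantization while keeping int8 storage, and
# (2) counting bit operations (BOPs) of a matrix multiplication as a power proxy.
# The quantizer, helper names, and the BOPs formula
# (MACs x weight bit-width x activation bit-width) are illustrative assumptions.
import numpy as np


def quantize(x: np.ndarray, num_bits: int) -> tuple[np.ndarray, float]:
    """Uniform symmetric quantization of x to signed `num_bits` integers,
    stored in an int8 container so the matmul can run on 8-bit hardware."""
    qmax = 2 ** (num_bits - 1) - 1                  # 7 for 4-bit signed
    scale = float(np.max(np.abs(x))) / qmax
    scale = scale if scale > 0 else 1.0
    q = np.clip(np.round(x / scale), -qmax, qmax)
    # For 4-bit values in [-7, 7], the four most significant bits of each int8
    # are sign-extension bits; they are 0 for all non-negative operands and
    # therefore toggle far less often between consecutive compute cycles.
    return q.astype(np.int8), scale


def bops(macs: int, weight_bits: int, act_bits: int) -> int:
    """Bit operations of a layer: MACs x weight bit-width x activation bit-width."""
    return macs * weight_bits * act_bits


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal((64, 64)).astype(np.float32)   # toy weight matrix
    a = rng.standard_normal((64, 64)).astype(np.float32)   # toy activations

    w_q, w_scale = quantize(w, num_bits=4)
    a_q, a_scale = quantize(a, num_bits=4)

    # The matmul is still a (simulated) 8-bit operation with a wide accumulator.
    acc = w_q.astype(np.int32) @ a_q.astype(np.int32)
    out = acc * (w_scale * a_scale)                         # dequantized result

    macs = w.shape[0] * w.shape[1] * a.shape[1]
    print("BOPs at 8x8 bits:", bops(macs, 8, 8))
    print("BOPs at 4x4 bits:", bops(macs, 4, 4))
    print("max abs error vs. float matmul:", float(np.max(np.abs(out - w @ a))))
```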
Related Material
[pdf]
[bibtex]
@InProceedings{van_Baalen_2022_CVPR,
    author    = {van Baalen, Mart and Kahne, Brian and Mahurin, Eric and Kuzmin, Andrey and Skliar, Andrii and Nagel, Markus and Blankevoort, Tijmen},
    title     = {Simulated Quantization, Real Power Savings},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2022},
    pages     = {2757-2761}
}