DeepShift: Towards Multiplication-Less Neural Networks
Mostafa Elhoushi, Zihao Chen, Farhan Shafiq, Ye Henry Tian, Joey Yiwei Li
Abstract
The high computation, memory, and power budgets of inferring convolutional neural networks (CNNs) are major bottlenecks of model deployment to edge computing platforms, e.g., mobile devices and IoT. Moreover, training CNNs is time- and energy-intensive even on high-grade servers. Convolution layers and fully connected layers, because of their intense use of multiplications, are the dominant contributors to this computation budget. We propose to alleviate this problem by introducing two new operations, convolutional shifts and fully connected shifts, which replace multiplications with bitwise shifts and sign flips during both training and inference. During inference, both approaches require only 5 bits (or less) to represent the weights. We refer to this family of neural network architectures (those using convolutional shifts and fully connected shifts) as DeepShift models. We propose two methods to train DeepShift models: DeepShift-Q, which trains regular weights constrained to powers of 2, and DeepShift-PS, which trains the values of the shifts and sign flips directly. The resulting models achieve accuracies very close to, and in some cases higher than, their baselines. Converting pre-trained 32-bit floating-point baseline models of ResNet18, ResNet50, VGG16, and GoogleNet to DeepShift and training them for 15 to 30 epochs resulted in Top-1/Top-5 accuracies higher than those of the original models. Training the DeepShift version of the ResNet18 architecture from scratch, we obtained an accuracy of 94.26% on the CIFAR10 dataset and Top-1/Top-5 accuracies of 65.32%/86.30% on the ImageNet dataset. Training the DeepShift version of VGG16 on ImageNet from scratch resulted in a drop of less than 0.3% in Top-5 accuracy. The code can be found at https://github.com/mostafaelhoushi/DeepShift.
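To make the weight representation concrete, the sketch below emulates a fully connected shift layer in the DeepShift-PS spirit: each effective weight is sign * 2^(-shift), so on integer hardware a multiplication reduces to a bitwise shift and a sign flip. All names here (ShiftLinear, round_ste, sign_ste, max_shift) are illustrative assumptions rather than the API of the linked repository, and the straight-through estimator is a standard surrogate for training discrete values, not necessarily the paper's exact training scheme.

import torch
import torch.nn as nn
import torch.nn.functional as F

def round_ste(x):
    # Straight-through estimator: round in the forward pass,
    # pass gradients through unchanged in the backward pass.
    return x + (torch.round(x) - x).detach()

def sign_ste(x):
    # Straight-through sign: +/-1 in the forward pass, identity gradient.
    s = torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))
    return x + (s - x).detach()

class ShiftLinear(nn.Module):
    """Illustrative fully connected shift layer (names are hypothetical).
    Each effective weight is sign * 2**(-shift); here the powers of two
    are emulated in floating point for clarity."""

    def __init__(self, in_features, out_features, max_shift=7):
        super().__init__()
        # Trainable shift amounts and signs, stored as continuous
        # surrogates and discretized via the STE helpers above.
        # max_shift=7 keeps each shift within 3 bits, so shift + sign
        # fits comfortably in the 5-bit budget the abstract mentions.
        self.shift = nn.Parameter(torch.rand(out_features, in_features) * max_shift)
        self.sign = nn.Parameter(torch.randn(out_features, in_features))

    def forward(self, x):
        # weight = sign * 2**(-shift), so every multiply is a shift + sign flip.
        weight = sign_ste(self.sign) * torch.pow(2.0, -round_ste(self.shift))
        return F.linear(x, weight)

# Example usage (hypothetical): y = ShiftLinear(128, 64)(torch.randn(1, 128))

The DeepShift-Q variant described in the abstract would instead keep a regular weight tensor and round it to the nearest signed power of two in the forward pass; the STE trick above applies in the same way.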
Related Material
[pdf] [arXiv] [bibtex]
@InProceedings{Elhoushi_2021_CVPR,
  author    = {Elhoushi, Mostafa and Chen, Zihao and Shafiq, Farhan and Tian, Ye Henry and Li, Joey Yiwei},
  title     = {DeepShift: Towards Multiplication-Less Neural Networks},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2021},
  pages     = {2359-2368}
}