ReSprop: Reuse Sparsified Backpropagation
Negar Goli, Tor M. Aamodt; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 1548-1558
Abstract
The success of Convolutional Neural Networks (CNNs) in various applications is accompanied by a significant increase in computation and training time. In this work, we focus on accelerating training by observing that about 90% of gradients are reusable during training. Leveraging this observation, we propose a new algorithm, Reuse-Sparse-Backprop (ReSprop), as a method to sparsify gradient vectors during CNN training. ReSprop maintains state-of-the-art accuracy on the CIFAR-10, CIFAR-100, and ImageNet datasets with less than 1.1% accuracy loss, while reducing back-propagation computations by a factor of 10x and thereby achieving a 2.7x overall speedup in training. As the computation reduction introduced by ReSprop is accomplished through fine-grained sparsity, which lowers computational efficiency on GPUs, we introduce a generic sparse convolution neural network accelerator (GSCN) designed to accelerate sparse back-propagation convolutions. When combined with ReSprop, GSCN achieves 8.0x and 7.2x speedups in the backward pass on ResNet34 and VGG16, respectively, versus a GTX 1080 Ti GPU.
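The sketch below illustrates the gradient-reuse idea summarized in the abstract: most of a layer's output gradient is taken from the previous iteration, and only a small, high-magnitude residual is recomputed, so the backward convolutions operate on a roughly 90%-sparse tensor. This is a minimal NumPy illustration, not the paper's exact algorithm; the function name resprop_style_gradient, the top-k selection rule, and the use of a dense current gradient for demonstration are assumptions made only for clarity.

import numpy as np

def resprop_style_gradient(prev_grad: np.ndarray,
                           new_grad: np.ndarray,
                           reuse_ratio: float = 0.9) -> np.ndarray:
    """Combine a reused gradient with a sparse correction.

    prev_grad   : output gradient saved from the previous iteration
    new_grad    : current output gradient (shown dense here only for
                  illustration; ReSprop's goal is to avoid computing
                  most of it)
    reuse_ratio : fraction of elements reused from prev_grad
                  (~0.9 in the paper's experiments)
    """
    diff = new_grad - prev_grad
    k = int(round((1.0 - reuse_ratio) * diff.size))  # elements to recompute
    if k == 0:
        return prev_grad.copy()
    # Keep only the k largest-magnitude corrections (assumed selection rule).
    threshold = np.partition(np.abs(diff).ravel(), -k)[-k]
    sparse_correction = np.where(np.abs(diff) >= threshold, diff, 0.0)
    return prev_grad + sparse_correction

# Toy usage: about 90% of the combined gradient is identical to the reused
# one, which is what makes the backward-pass convolutions sparse.
rng = np.random.default_rng(0)
g_prev = rng.standard_normal((64, 32, 8, 8))
g_new = g_prev + 0.05 * rng.standard_normal((64, 32, 8, 8))
g_approx = resprop_style_gradient(g_prev, g_new, reuse_ratio=0.9)
print("recomputed fraction:", np.mean(g_approx != g_prev))  # ~0.1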
Related Material
[pdf]
[bibtex]
@InProceedings{Goli_2020_CVPR,
author = {Goli, Negar and Aamodt, Tor M.},
title = {ReSprop: Reuse Sparsified Backpropagation},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}