Binarized Convolutional Neural Networks With Separable Filters for Efficient Hardware Acceleration

Jeng-Hau Lin, Tianwei Xing, Ritchie Zhao, Zhiru Zhang, Mani Srivastava, Zhuowen Tu, Rajesh K. Gupta; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2017, pp. 27-35

Abstract


State-of-the-art convolutional neural networks are enormously costly in both compute and memory, demanding massively parallel GPUs for execution. Such networks strain the computational capabilities and energy available to embedded and mobile processing platforms, restricting their use in many important applications. In this paper, we propose the binarized convolutional neural network with Separable Filters (BCNNw/SF), which applies Singular Value Decomposition (SVD) to BCNN kernels to further reduce computational and storage complexity. We provide a closed form of the gradient over SVD to calculate the exact gradient with respect to every binarized weight in backward propagation. We verify BCNNw/SF on the MNIST, CIFAR-10, and SVHN datasets, and implement an accelerator for CIFAR-10 on FPGA hardware. Our BCNNw/SF accelerator realizes memory savings of 17% and execution time reduction of 31.3% compared to BCNN, with only minor accuracy sacrifices.
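As a rough sketch of the core idea (not the paper's implementation), the snippet below factors a 2D convolution kernel into a rank-1 separable pair via SVD and then binarizes the factors with the sign function, as in BCNN-style weight binarization. The kernel values and variable names are illustrative; assume NumPy is available.

```python
import numpy as np

# Hypothetical 3x3 convolution kernel (values chosen for illustration;
# this particular kernel happens to be exactly separable).
kernel = np.array([[ 1., -1.,  1.],
                   [-1.,  1., -1.],
                   [ 1., -1.,  1.]])

# SVD expresses the kernel as a sum of rank-1 terms: K = sum_i s_i * u_i v_i^T.
U, s, Vt = np.linalg.svd(kernel)

# Keep only the leading rank-1 term: one column filter times one row filter.
# Convolving with the column then the row filter costs 2k multiplies per
# output pixel instead of k^2 for the full k x k kernel.
col = s[0] * U[:, 0]          # 1D vertical filter
row = Vt[0, :]                # 1D horizontal filter
separable = np.outer(col, row)

# Binarize the separable factors; the sign ambiguity of SVD cancels in
# the outer product, so the binarized kernel is well defined.
bin_col = np.sign(U[:, 0])
bin_row = np.sign(Vt[0, :])
bin_kernel = np.outer(bin_col, bin_row)
```

For a general (non-separable) kernel the rank-1 term is only an approximation, which is why the paper reports a small accuracy trade-off against plain BCNN.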

Related Material


[bibtex]
@InProceedings{Lin_2017_CVPR_Workshops,
author = {Lin, Jeng-Hau and Xing, Tianwei and Zhao, Ritchie and Zhang, Zhiru and Srivastava, Mani and Tu, Zhuowen and Gupta, Rajesh K.},
title = {Binarized Convolutional Neural Networks With Separable Filters for Efficient Hardware Acceleration},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {July},
year = {2017}
}