Making Models Shallow Again: Jointly Learning To Reduce Non-Linearity and Depth for Latency-Efficient Private Inference

Souvik Kundu, Yuke Zhang, Dake Chen, Peter A. Beerel; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2023, pp. 4685-4689

Abstract


The large number of ReLU and MAC operations in deep neural networks makes them ill-suited for latency- and compute-efficient private inference. In this paper, we present a model optimization method that allows a model to learn to be shallow. In particular, we leverage the ReLU sensitivity of a convolutional block to remove a ReLU layer and merge its preceding and succeeding convolution layers into a shallower block. Unlike existing ReLU reduction methods, our joint reduction method yields models with improved reduction of both ReLUs and linear operations, by up to 1.73x and 1.47x, respectively, evaluated with ResNet18 on CIFAR-100 without any significant accuracy drop.
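
The key operation sketched in the abstract, removing the ReLU between two convolution layers and merging them into a single shallower layer, can be illustrated with a small example. The snippet below is a hypothetical sketch (not the paper's implementation): it fuses two bias-free, stride-1, unpadded 2D convolutions into one equivalent convolution, which is only exact once the intervening ReLU has been removed. The helper name fuse_convs and all tensor shapes are our own choices for illustration; with zero padding or biases the fusion needs additional border and bias handling.

    import torch
    import torch.nn.functional as F

    def fuse_convs(w1, w2):
        """Fuse conv2(conv1(x)) into a single convolution.

        Exact only when the ReLU between the two layers has been removed and
        both convolutions are bias-free, stride-1, and unpadded.
        w1: [c1, c0, k1, k1], w2: [c2, c1, k2, k2]
        Returns a fused weight of shape [c2, c0, k1 + k2 - 1, k1 + k2 - 1].
        """
        k2 = w2.shape[-1]
        # Treat w1 (in/out channels swapped) as the "input" and the spatially
        # flipped w2 as the kernel; the full cross-correlation yields the
        # composite kernel of the two stacked convolutions.
        fused = F.conv2d(w1.permute(1, 0, 2, 3), w2.flip([2, 3]), padding=k2 - 1)
        return fused.permute(1, 0, 2, 3)

    # Sanity check: the fused kernel matches the two-layer composition.
    x  = torch.randn(1, 16, 32, 32)
    w1 = torch.randn(32, 16, 3, 3)   # first conv of the block
    w2 = torch.randn(64, 32, 3, 3)   # second conv of the block
    two_step = F.conv2d(F.conv2d(x, w1), w2)      # ReLU in between removed
    one_step = F.conv2d(x, fuse_convs(w1, w2))    # single shallower layer
    assert torch.allclose(two_step, one_step, atol=1e-3)

In a private-inference setting, such a merge removes both the non-linear (ReLU) layer and one linear layer from the block, which is the joint ReLU-and-depth reduction the abstract refers to.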

Related Material


@InProceedings{Kundu_2023_CVPR,
    author    = {Kundu, Souvik and Zhang, Yuke and Chen, Dake and Beerel, Peter A.},
    title     = {Making Models Shallow Again: Jointly Learning To Reduce Non-Linearity and Depth for Latency-Efficient Private Inference},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2023},
    pages     = {4685-4689}
}