Learning Strict Identity Mappings in Deep Residual Networks

Xin Yu, Zhiding Yu, Srikumar Ramalingam; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 4432-4440

Abstract


A family of super deep networks, referred to as residual networks or ResNet (He et al., 2016), achieved record-beating performance in various visual tasks such as image recognition, object detection, and semantic segmentation. The ability to train very deep networks naturally pushed researchers to use enormous resources to achieve the best performance. Consequently, in many applications super deep residual networks were employed for just a marginal improvement in performance. In this paper, we propose ε-ResNet, which allows us to automatically discard redundant layers, i.e., layers whose responses are smaller than a threshold ε, without any loss in performance. The ε-ResNet architecture can be achieved using a few additional rectified linear units in the original ResNet. Our method requires neither additional variables nor numerous trials, unlike other hyper-parameter optimization techniques. The layer selection is achieved in a single training process, and we evaluate on the CIFAR-10, CIFAR-100, SVHN, and ImageNet datasets. In some instances, we achieve about an 80% reduction in the number of parameters.
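The core idea of discarding a residual block whose responses fall below a threshold can be illustrated with a minimal NumPy sketch. This is not the authors' actual ReLU-based construction from the paper (which realizes the gating with a few extra rectified linear units inside the network); `epsilon_gate` is a hypothetical helper showing only the thresholding behavior, assuming the block output F(x) is available as an array.

```python
import numpy as np

def epsilon_gate(residual, eps):
    """Zero out a residual block's output when every response is below eps.

    If the largest absolute activation of F(x) stays under eps, the
    block contributes nothing, so x + F(x) collapses to a strict
    identity mapping and the layer can be discarded.
    """
    if np.max(np.abs(residual)) < eps:
        return np.zeros_like(residual)
    return residual

# Toy usage: a "redundant" block with tiny responses is suppressed,
# while an informative block passes through unchanged.
x = np.array([1.0, -2.0, 0.5])          # skip-connection input
weak = np.array([0.01, -0.02, 0.005])   # small residual responses
strong = np.array([0.4, -0.1, 0.9])     # informative residual responses

out_weak = x + epsilon_gate(weak, eps=0.1)    # strict identity: equals x
out_strong = x + epsilon_gate(strong, eps=0.1)  # ordinary residual addition
```

At inference time, blocks whose gate is always zero can simply be removed from the network, which is where the parameter reduction comes from.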

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Yu_2018_CVPR,
author = {Yu, Xin and Yu, Zhiding and Ramalingam, Srikumar},
title = {Learning Strict Identity Mappings in Deep Residual Networks},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}