A Deep Convolutional Neural Network With Selection Units for Super-Resolution

Jae-Seok Choi, Munchurl Kim; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2017, pp. 154-160

Abstract
Rectified linear units (ReLU) are known to be effective in many deep learning methods. Inspired by the linear-mapping technique used in other super-resolution (SR) methods, we reinterpret ReLU as the point-wise multiplication of an identity mapping and a switch, and present a novel nonlinear unit, called a selection unit (SU). While a conventional ReLU has no direct control over which data is passed through, the proposed SU learns this on-off switching control and can therefore handle nonlinearity more flexibly than ReLU. Our proposed deep network with SUs, called SelNet, ranked fifth in the NTIRE2017 Challenge while having much lower computational complexity than the top four entries. Further experimental results show that SelNet outperforms both our ReLU-only baseline (without SUs) and other state-of-the-art deep-learning-based SR methods.
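The core idea above can be sketched in a few lines of NumPy. ReLU(x) can be written as x multiplied by a hard 0/1 switch on the sign of x; a selection unit replaces that fixed switch with a learned gate in (0, 1). The gate layout below (ReLU, then a 1x1 convolution, then a sigmoid) and the names `selection_unit` and `relu_as_switch` are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu_as_switch(x):
    # ReLU viewed as identity * hard switch: the switch is 1 where x > 0, else 0.
    return x * (x > 0)

def selection_unit(x, w, b):
    """Identity mapping gated by a learned soft switch (a sketch, not the
    paper's exact module).

    x: feature map of shape (channels, height, width)
    w: 1x1 convolution weights, shape (channels, channels)
    b: bias, shape (channels,)
    """
    c, h, w_ = x.shape
    # A 1x1 convolution is per-pixel channel mixing: flatten spatial dims
    # to (channels, height*width) and apply a matrix multiply.
    flat = relu(x).reshape(c, -1)
    gate = sigmoid(w @ flat + b[:, None]).reshape(c, h, w_)
    # Point-wise multiplication of the identity mapping and the switch.
    return x * gate

# Example: the soft gate lies in (0, 1), so each output element is the
# input element attenuated by a learned amount.
x = np.random.randn(4, 5, 5)
w = np.random.randn(4, 4) * 0.1
b = np.zeros(4)
y = selection_unit(x, w, b)
```

Because the sigmoid gate is strictly between 0 and 1, `selection_unit` never amplifies a feature, only attenuates it element-wise; ReLU is recovered as the limiting case of a hard sign-based gate.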

Related Material
[pdf]
[bibtex]
@InProceedings{Choi_2017_CVPR_Workshops,
author = {Choi, Jae-Seok and Kim, Munchurl},
title = {A Deep Convolutional Neural Network With Selection Units for Super-Resolution},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {July},
year = {2017}
}