xUnit: Learning a Spatial Activation Function for Efficient Image Restoration
Idan Kligvasser, Tamar Rott Shaham, Tomer Michaeli; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 2433-2442
Abstract
In recent years, deep neural networks (DNNs) have achieved unprecedented performance in many low-level vision tasks. However, state-of-the-art results are typically achieved by very deep networks, which can reach tens of layers with tens of millions of parameters. To make DNNs implementable on platforms with limited resources, it is necessary to weaken the tradeoff between performance and efficiency. In this paper, we propose a new activation unit, which is particularly suitable for image restoration problems. In contrast to the widespread per-pixel activation units, like ReLUs and sigmoids, our unit implements a learnable nonlinear function with spatial connections. This enables the net to capture much more complex features, thus requiring a significantly smaller number of layers in order to reach the same performance. We illustrate the effectiveness of our units through experiments with state-of-the-art nets for denoising, de-raining, and super resolution, which are already considered to be very small. With our approach, we are able to further reduce these models by nearly 50% without incurring any degradation in performance.
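The sketch below illustrates the idea of a spatial, learnable activation in PyTorch. It is not the authors' released implementation; it assumes a gating design in which a small branch (ReLU followed by a depthwise convolution and a Gaussian squashing) produces a per-pixel weight map that multiplies the layer input, replacing a per-pixel nonlinearity such as ReLU. The module name SpatialGateActivation and the kernel size are illustrative choices.

import torch
import torch.nn as nn

class SpatialGateActivation(nn.Module):
    """Sketch of an xUnit-style activation: instead of a per-pixel
    nonlinearity, a spatial branch computes a gating map in [0, 1]
    that multiplies the input element-wise (assumed design)."""

    def __init__(self, channels: int, kernel_size: int = 9):
        super().__init__()
        padding = kernel_size // 2
        self.gate = nn.Sequential(
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            # Depthwise convolution: adds spatial connections with
            # relatively few extra parameters per channel.
            nn.Conv2d(channels, channels, kernel_size,
                      padding=padding, groups=channels),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        d = self.gate(x)
        g = torch.exp(-d * d)   # Gaussian squashing to (0, 1]
        return x * g            # spatial, learnable gating of the input

if __name__ == "__main__":
    act = SpatialGateActivation(channels=64)
    y = act(torch.randn(1, 64, 32, 32))
    print(y.shape)  # torch.Size([1, 64, 32, 32])

Because the gating map depends on a learned spatial neighborhood rather than a single pixel, each layer can realize richer features, which is the mechanism the paper credits for reaching the same performance with fewer layers.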
Related Material
[pdf]
[arXiv]
[video]
[bibtex]
@InProceedings{Kligvasser_2018_CVPR,
author = {Kligvasser, Idan and Shaham, Tamar Rott and Michaeli, Tomer},
title = {xUnit: Learning a Spatial Activation Function for Efficient Image Restoration},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}