UDNet: Up-Down Network for Compact and Efficient Feature Representation in Image Super-Resolution

Chang Chen, Xinmei Tian, Zhiwei Xiong, Feng Wu; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 1069-1076

Abstract


Recently, image super-resolution (SR) using convolutional neural networks (CNNs) has achieved remarkable performance. However, there is a tradeoff between the performance and speed of SR, depending on whether feature representation and learning are conducted in high-resolution (HR) or low-resolution (LR) space. Generally, to pursue real-time SR, the number of parameters in CNNs has to be restricted, which results in performance degradation. In this paper, we propose a compact and efficient feature representation for real-time SR, named the up-down network (UDNet). Specifically, we introduce a novel hourglass-shaped structure that combines transposed convolution and spatial aggregation. This structure enables the network to transfer feature representations between the LR and HR spaces multiple times to learn a better mapping. Comprehensive experiments demonstrate that, compared with existing CNN models, UDNet achieves real-time SR without performance degradation on widely used benchmarks.
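The abstract gives no implementation details, but the core "up-down" idea — project LR features to HR space with a transposed convolution, then aggregate back down to LR space with a strided convolution, repeating this transfer several times — can be illustrated with a minimal single-channel NumPy sketch. The kernel size (4), stride (2), and padding (1) below are assumptions chosen so that each step exactly doubles or halves the spatial size; they are not taken from the paper.

```python
import numpy as np

def conv_transpose2d(x, w, stride=2, pad=1):
    """Single-channel transposed convolution: the 'up' step (LR -> HR)."""
    k = w.shape[0]
    H, W = x.shape
    out_h = (H - 1) * stride + k
    out_w = (W - 1) * stride + k
    y = np.zeros((out_h, out_w))
    # Each input pixel scatters a scaled copy of the kernel into the output.
    for i in range(H):
        for j in range(W):
            y[i * stride:i * stride + k, j * stride:j * stride + k] += x[i, j] * w
    # Crop the padding; with k=4, stride=2, pad=1 the output is exactly 2x the input.
    return y[pad:out_h - pad, pad:out_w - pad]

def conv2d(x, w, stride=2, pad=1):
    """Single-channel strided convolution: the 'down' (spatial aggregation) step."""
    k = w.shape[0]
    xp = np.pad(x, pad)
    H, W = xp.shape
    out_h = (H - k) // stride + 1
    out_w = (W - k) // stride + 1
    y = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            y[i, j] = np.sum(xp[i * stride:i * stride + k,
                                j * stride:j * stride + k] * w)
    return y

# Toy weights (averaging filters) stand in for learned kernels.
lr = np.random.rand(8, 8)
w_up = np.ones((4, 4)) / 16.0
w_down = np.ones((4, 4)) / 16.0

hr = conv_transpose2d(lr, w_up)   # 8x8  -> 16x16 (LR space to HR space)
back = conv2d(hr, w_down)         # 16x16 -> 8x8  (aggregate back to LR space)
```

Stacking such up-down pairs yields the hourglass shape the abstract describes: representation learning alternates between the cheap LR space and the detail-rich HR space instead of committing to only one.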

Related Material


[bibtex]
@InProceedings{Chen_2017_ICCV,
author = {Chen, Chang and Tian, Xinmei and Xiong, Zhiwei and Wu, Feng},
title = {UDNet: Up-Down Network for Compact and Efficient Feature Representation in Image Super-Resolution},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2017}
}