Residual Dense Network for Image Super-Resolution

Yulun Zhang, Yapeng Tian, Yu Kong, Bineng Zhong, Yun Fu; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 2472-2481


In this paper, we propose a residual dense network (RDN) for image super-resolution (SR). As the same content in different natural images often appears at various scales and viewing angles, jointly learning hierarchical features is essential for image SR. Meanwhile, very deep convolutional neural networks (CNNs) have recently achieved great success for image SR and naturally provide hierarchical features. However, most deep CNN-based SR models neglect to jointly make full use of these hierarchical features. In addition, densely connected layers allow the network to be deeper, more efficient to train, and more powerful. Motivated by these observations, our proposed RDN fully exploits all the meaningful convolutional features in both local and global manners. Specifically, we use densely connected convolutional layers to extract abundant local features, and local feature fusion to adaptively learn more effective features from preceding and current local features. After fully obtaining dense local features, we use global feature fusion to jointly and adaptively learn global hierarchical features in a holistic way. Extensive experiments on benchmark datasets show that our RDN achieves favorable performance against state-of-the-art methods both quantitatively and visually.
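The dense connectivity and feature-fusion steps described in the abstract can be sketched in a toy NumPy form. This is a minimal illustration, not the paper's implementation: the layer count, growth rate, random weights, and helper names (`conv3x3`, `conv1x1`, `residual_dense_block`) are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv3x3(x, w):
    # Naive 'same' 3x3 convolution over a (C_in, H, W) feature map,
    # followed by ReLU. w has shape (C_out, C_in, 3, 3).
    c_out, c_in, _, _ = w.shape
    _, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((c_out, h, wd))
    for i in range(3):
        for j in range(3):
            patch = xp[:, i:i + h, j:j + wd]              # (C_in, H, W)
            out += np.einsum('oc,chw->ohw', w[:, :, i, j], patch)
    return np.maximum(out, 0.0)

def conv1x1(x, w):
    # 1x1 convolution = channel-wise linear mixing, used for feature fusion.
    return np.einsum('oc,chw->ohw', w, x)

def residual_dense_block(x, n_layers=3, growth=8):
    # x: (C, H, W). Each densely connected layer sees the concatenation
    # of the block input and all preceding layers' features.
    feats = [x]
    for _ in range(n_layers):
        c_in = sum(f.shape[0] for f in feats)
        w = rng.standard_normal((growth, c_in, 3, 3)) * 0.1
        feats.append(conv3x3(np.concatenate(feats, axis=0), w))
    concat = np.concatenate(feats, axis=0)                # all local features
    w_fuse = rng.standard_normal((x.shape[0], concat.shape[0])) * 0.1
    fused = conv1x1(concat, w_fuse)                       # local feature fusion
    return x + fused                                      # residual connection

x = rng.standard_normal((16, 8, 8))
y = residual_dense_block(x)
print(y.shape)  # (16, 8, 8): channel count is preserved, so blocks stack
```

Because the 1x1 fusion restores the input channel count, blocks like this can be chained, and their outputs concatenated and fused once more to mimic the global feature fusion the abstract describes.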

Related Material

[pdf] [arXiv]
@InProceedings{Zhang_2018_CVPR,
author = {Zhang, Yulun and Tian, Yapeng and Kong, Yu and Zhong, Bineng and Fu, Yun},
title = {Residual Dense Network for Image Super-Resolution},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}