Image Super-Resolution via Residual Block Attention Networks

Tao Dai, Hua Zha, Yong Jiang, Shu-Tao Xia; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019

Abstract

Recently, deep convolutional neural networks (CNNs) have been widely used in image super-resolution (SR). Most state-of-the-art CNN-based SR methods focus on improving performance by designing deeper and wider networks. However, 1) deeper networks are more difficult to train; and 2) the relationships among features have not been thoroughly explored, which limits the representational power of CNNs. In this paper, we investigate an effective end-to-end neural structure for more powerful feature expression and feature correlation learning. Specifically, we propose a residual block attention network (RBAN) framework, which employs two types of attention modules to efficiently exploit feature correlations along the spatial and channel dimensions for stronger feature expression. The proposed RBAN framework consists of a series of residual attention groups, each of which is composed of several repeated residual block attention blocks, so as to not only fully exploit the hierarchical features from different convolutional layers but also efficiently capture the contextual information and interdependencies among channels. Experimental results demonstrate the superiority of our RBAN over state-of-the-art SR methods in terms of both quantitative and visual quality.
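The channel attention described in the abstract can be illustrated with a minimal sketch. The abstract does not specify the exact formulation, so the version below assumes a squeeze-and-excitation-style gate (global average pooling followed by a two-layer bottleneck and a sigmoid); the function name, the `reduction` ratio, and the random placeholder weights standing in for learned parameters are all illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def channel_attention(features, reduction=4):
    """Hypothetical channel-attention sketch (not the paper's exact module).

    features: array of shape (C, H, W).
    Returns the feature map with each channel rescaled by a gate in (0, 1).
    """
    c, h, w = features.shape
    # Squeeze: global average pooling over the spatial dimensions -> (C,)
    descriptor = features.mean(axis=(1, 2))
    # Excitation: bottleneck of two linear maps; weights are random
    # placeholders here, since learned parameters are out of scope.
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    hidden = np.maximum(w1 @ descriptor, 0.0)        # ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))      # sigmoid, values in (0, 1)
    # Rescale each input channel by its gate value
    return features * gate[:, None, None]

x = np.ones((8, 4, 4))
y = channel_attention(x)
```

A spatial attention module would follow the same squeeze/gate pattern but pool across channels instead, producing an (H, W) map that reweights spatial positions.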

Related Material

[bibtex]
@InProceedings{Dai_2019_ICCV,
author = {Dai, Tao and Zha, Hua and Jiang, Yong and Xia, Shu-Tao},
title = {Image Super-Resolution via Residual Block Attention Networks},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2019}
}