FormResNet: Formatted Residual Learning for Image Restoration

Jianbo Jiao, Wei-Chih Tu, Shengfeng He, Rynson W. H. Lau; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2017, pp. 38-46

Abstract

In this paper, we propose a deep CNN that tackles the image restoration problem by learning the structured residual. Previous deep-learning-based methods directly learn the mapping from corrupted images to clean images and may suffer from the exploding/vanishing gradient problems of deep neural networks. We instead address image restoration by jointly learning the structured details and recovering the latent clean image, exploiting the information shared between the corrupted image and the latent image. In addition, instead of learning the pure difference (corruption), we add a "residual formatting layer" that formats the residual into structured information, which allows the network to converge faster and boosts performance. Furthermore, we propose a cross-level loss net to ensure both pixel-level accuracy and semantic-level visual quality. Evaluations on public datasets show that the proposed method outperforms existing approaches both quantitatively and qualitatively.
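
The following is a minimal sketch of the residual-learning idea described above, assuming a PyTorch implementation. The class name ResidualRestorationNet, the layer counts and channel widths, the single-convolution stand-in for the residual formatting layer, the frozen feature extractor feat_net, and the weight alpha in the cross-level loss are all illustrative assumptions, not the architecture reported in the paper.

import torch
import torch.nn as nn

class ResidualRestorationNet(nn.Module):
    def __init__(self, channels=64, depth=10):
        super().__init__()
        # Plain convolutional body that predicts features of the residual.
        layers = [nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.BatchNorm2d(channels),
                       nn.ReLU(inplace=True)]
        self.body = nn.Sequential(*layers)
        # "Formatting" stage: an extra convolution applied to the predicted
        # residual before it is subtracted from the input (a simple stand-in
        # for the paper's residual formatting layer).
        self.format_layer = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, corrupted):
        residual = self.format_layer(self.body(corrupted))
        # The latent clean image is recovered as input minus the learned residual.
        return corrupted - residual

def cross_level_loss(pred, target, feat_net, alpha=0.1):
    # Cross-level objective: pixel-level L2 plus a semantic-level term computed
    # on features from a frozen extractor feat_net (e.g. a truncated VGG; left
    # abstract here). The weight alpha is an assumed hyperparameter.
    pixel = nn.functional.mse_loss(pred, target)
    semantic = nn.functional.mse_loss(feat_net(pred), feat_net(target))
    return pixel + alpha * semantic

# Example usage with a random noisy input.
net = ResidualRestorationNet()
noisy = torch.rand(1, 3, 64, 64)
restored = net(noisy)
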

Related Material

[pdf]
[bibtex]
@InProceedings{Jiao_2017_CVPR_Workshops,
author = {Jiao, Jianbo and Tu, Wei-Chih and He, Shengfeng and Lau, Rynson W. H.},
title = {FormResNet: Formatted Residual Learning for Image Restoration},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {July},
year = {2017}
}