EnhanceNet: Single Image Super-Resolution Through Automated Texture Synthesis

Mehdi S. M. Sajjadi, Bernhard Schölkopf, Michael Hirsch; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 4491-4500

Abstract


Single image super-resolution is the task of inferring a high-resolution image from a single low-resolution input. Traditionally, the performance of algorithms for this task is measured using pixel-wise reconstruction measures such as peak signal-to-noise ratio (PSNR) which have been shown to correlate poorly with the human perception of image quality. As a result, algorithms minimizing these metrics tend to produce over-smoothed images that lack high-frequency textures and do not look natural despite yielding high PSNR values. We propose a novel application of automated texture synthesis in combination with a perceptual loss focusing on creating realistic textures rather than optimizing for a pixel-accurate reproduction of ground truth images during training. By using feed-forward fully convolutional neural networks in an adversarial training setting, we achieve a significant boost in image quality at high magnification ratios. Extensive experiments on a number of datasets show the effectiveness of our approach, yielding state-of-the-art results in both quantitative and qualitative benchmarks.
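To make the contrast between the pixel-wise metric criticized above and a texture-oriented objective concrete, the sketch below computes PSNR and a Gram-matrix texture term over CNN feature maps, in the spirit of neural texture synthesis. The function names, shapes, and normalization are assumptions chosen for this illustration, not the authors' implementation.

import numpy as np

def psnr(reference, estimate, max_val=1.0):
    """Peak signal-to-noise ratio between two images with values in [0, max_val]."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def gram_matrix(features):
    """Gram matrix of a feature map with shape (channels, height, width)."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def texture_loss(features_sr, features_hr):
    """Mean squared distance between Gram matrices of super-resolved and
    ground-truth feature maps (e.g. activations of a pretrained CNN)."""
    return np.mean((gram_matrix(features_sr) - gram_matrix(features_hr)) ** 2)

A network trained only to maximize PSNR is rewarded for predicting the mean of all plausible high-resolution explanations, which is what produces over-smoothed outputs; a texture term of the kind sketched above instead compares feature statistics, so it tolerates pixel-level deviations as long as the local texture looks right.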

Related Material


BibTeX
@InProceedings{Sajjadi_2017_ICCV,
author = {Sajjadi, Mehdi S. M. and Sch{\"o}lkopf, Bernhard and Hirsch, Michael},
title = {EnhanceNet: Single Image Super-Resolution Through Automated Texture Synthesis},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}