Geometry-Aware Deep Network for Single-Image Novel View Synthesis
Miaomiao Liu, Xuming He, Mathieu Salzmann; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 4616-4624
Abstract
This paper tackles the problem of novel view synthesis from a single image. In particular, we target real-world scenes with rich geometric structure, a challenging task due to the large appearance variations of such scenes and the lack of simple 3D models to represent them. Modern, learning-based approaches mostly focus on appearance to synthesize novel views and thus tend to generate predictions that are inconsistent with the underlying scene structure. By contrast, in this paper, we propose to exploit the 3D geometry of the scene to synthesize a novel view. Specifically, we approximate a real-world scene by a fixed number of planes, and learn to predict a set of homographies and their corresponding region masks to transform the input image into a novel view. To this end, we develop a new region-aware geometric transform network that performs these multiple tasks in a common framework. Our results on the outdoor KITTI and indoor ScanNet datasets demonstrate the effectiveness of our network at generating high-quality synthetic views that respect the scene geometry, thus outperforming state-of-the-art methods.
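Compositing Sketch (illustrative)
The abstract describes warping the input image with a set of predicted planar homographies and blending the warped images using predicted region masks. The code below is a minimal sketch of that compositing step only, not the authors' released code: it assumes PyTorch, a softmax blending of the masks, and hypothetical names such as warp_by_homography, compose_novel_view, and the plane count K; in the paper, the homographies and masks are themselves predicted by the region-aware geometric transform network, whereas here they are plain tensor inputs.

import torch
import torch.nn.functional as F


def warp_by_homography(image, H, out_hw):
    # image: (N, C, H_in, W_in); H: (N, 3, 3) mapping target pixels to source pixels.
    n, _, h_in, w_in = image.shape
    h_out, w_out = out_hw
    # Homogeneous pixel grid of the target view.
    ys, xs = torch.meshgrid(
        torch.arange(h_out, dtype=image.dtype, device=image.device),
        torch.arange(w_out, dtype=image.dtype, device=image.device),
        indexing="ij",
    )
    grid = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1).reshape(-1, 3)  # (H*W, 3)
    src = (H @ grid.T).transpose(1, 2)            # (N, H*W, 3) source coordinates
    src = src[..., :2] / (src[..., 2:] + 1e-8)    # dehomogenize
    # Normalize to [-1, 1] as required by grid_sample.
    src_x = 2.0 * src[..., 0] / (w_in - 1) - 1.0
    src_y = 2.0 * src[..., 1] / (h_in - 1) - 1.0
    sample_grid = torch.stack([src_x, src_y], dim=-1).reshape(n, h_out, w_out, 2)
    return F.grid_sample(image, sample_grid, align_corners=True)


def compose_novel_view(image, homographies, mask_logits):
    # homographies: (N, K, 3, 3); mask_logits: (N, K, H, W) predicted in the target view.
    n, k = homographies.shape[:2]
    h, w = mask_logits.shape[-2:]
    masks = torch.softmax(mask_logits, dim=1)     # soft assignment of pixels to planes
    warped = torch.stack(
        [warp_by_homography(image, homographies[:, i], (h, w)) for i in range(k)], dim=1
    )                                             # (N, K, C, H, W)
    return (masks.unsqueeze(2) * warped).sum(dim=1)   # mask-weighted blend: (N, C, H, W)

Each of the K homographies corresponds to one of the planes approximating the scene, so the mask-weighted sum stitches the per-plane warps into a single geometry-consistent novel view.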
Related Material
[pdf]
[supp]
[arXiv]
[bibtex]
@InProceedings{Liu_2018_CVPR,
author = {Liu, Miaomiao and He, Xuming and Salzmann, Mathieu},
title = {Geometry-Aware Deep Network for Single-Image Novel View Synthesis},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}