Single-View View Synthesis With Multiplane Images

Richard Tucker, Noah Snavely; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 551-560

Abstract

A recent strand of work in view synthesis uses deep learning to generate multiplane images (a camera-centric, layered 3D representation) given two or more input images at known viewpoints. We apply this representation to single-view view synthesis, a problem that is more challenging but has potentially much wider application. Our method learns to predict a multiplane image directly from a single input image, and we introduce scale-invariant view synthesis for supervision, enabling us to train on online video. We show that this approach is applicable to several different datasets, that it additionally generates reasonable depth maps, and that it learns to fill in content behind the edges of foreground objects in background layers.
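To make the two key ideas concrete, below is a minimal NumPy sketch of (a) the standard back-to-front "over" compositing used to render an image from a multiplane image, and (b) one possible scale-alignment step for scale-invariant supervision. The function names, array shapes, and the geometric-mean formula for the scale factor are illustrative assumptions, not the authors' code; the paper derives its scale factor from the predicted disparity and sparse structure-from-motion points.

```python
import numpy as np

def composite_mpi(rgba):
    """Render an MPI at its reference viewpoint.

    rgba: (D, H, W, 4) array of RGBA planes, plane 0 farthest from the
    camera. For a novel view, each plane would first be warped by the
    homography its depth induces, then composited the same way.
    """
    out = np.zeros(rgba.shape[1:3] + (3,), dtype=rgba.dtype)
    for plane in rgba:  # back to front: out = rgb*a + out*(1-a)
        rgb, alpha = plane[..., :3], plane[..., 3:]
        out = rgb * alpha + out * (1.0 - alpha)
    return out

def align_scale(pred_depth, sparse_xy, sparse_z):
    """One assumed way to resolve the monocular scale ambiguity:
    the geometric mean of ratios between SfM point depths and the
    predicted depth sampled at those points. The resulting factor
    can rescale the target camera translation before the view
    synthesis loss is applied.
    """
    sampled = pred_depth[sparse_xy[:, 1], sparse_xy[:, 0]]
    return float(np.exp(np.mean(np.log(sparse_z / sampled))))

# Toy usage: 32 planes, 4x4 image, random RGBA values in [0, 1].
rng = np.random.default_rng(0)
mpi = rng.uniform(size=(32, 4, 4, 4))
image = composite_mpi(mpi)  # (4, 4, 3) rendered image

sigma = align_scale(rng.uniform(1.0, 10.0, size=(4, 4)),
                    np.array([[0, 1], [2, 3]]),   # (x, y) pixel coords
                    np.array([2.0, 5.0]))         # SfM depths
t_scaled = sigma * np.array([0.1, 0.0, 0.0])      # scaled target translation
```

Because the per-plane alpha compositing and the scale factor are both differentiable, a sketch like this can sit inside an end-to-end training loop where the only supervision is a photometric loss against the (rescaled) target frame.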

Related Material

[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Tucker_2020_CVPR,
    author    = {Tucker, Richard and Snavely, Noah},
    title     = {Single-View View Synthesis With Multiplane Images},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2020},
    pages     = {551-560}
}