Monocular Neural Image Based Rendering With Continuous View Control

Xu Chen, Jie Song, Otmar Hilliges; The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 4090-4100

Abstract


We propose a method to produce a continuous stream of novel views under fine-grained (e.g., 1-degree step-size) camera control at interactive rates. A novel learning pipeline determines the output pixels directly from the source image's colors. Injecting geometric transformations, including perspective projection and 3D rotation and translation, into the network forces implicit reasoning about the underlying geometry. The latent 3D geometry representation is compact and meaningful under 3D transformation, and can produce geometrically accurate views for both single objects and natural scenes. Our experiments show that both proposed components, the transforming encoder-decoder and depth-guided appearance mapping, lead to significantly improved generalization beyond the training views and consequently to more accurate view synthesis under continuous 6-DoF camera control. Finally, we show that our method outperforms state-of-the-art baseline methods on public datasets.
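The core idea of the transforming encoder-decoder, as described in the abstract, is to apply an explicit rigid-body transformation and perspective projection to a latent 3D representation rather than regressing pixels directly. The minimal NumPy sketch below illustrates only that geometric operation on a toy latent point set; the helper names (`rotation_y`, `transform_latent`, `project`) and the point-cloud form of the latent are illustrative assumptions, not the paper's actual network internals.

```python
import numpy as np

def rotation_y(theta):
    """Rotation matrix about the y-axis (hypothetical helper)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def transform_latent(points, R, t):
    """Apply a rigid transform (R, t) to a latent 3D point set of shape (N, 3)."""
    return points @ R.T + t

def project(points, f=1.0):
    """Pinhole perspective projection onto the image plane at focal length f."""
    z = points[:, 2:3]
    return f * points[:, :2] / z

# Toy latent geometry: three points in front of the camera.
latent = np.array([[0.0, 0.0, 2.0],
                   [0.5, 0.0, 2.0],
                   [0.0, 0.5, 2.0]])

# A fine-grained 1-degree azimuth step, mirroring the camera control
# granularity mentioned in the abstract.
R = rotation_y(np.deg2rad(1.0))
t = np.zeros(3)

pixels = project(transform_latent(latent, R, t))
print(pixels.shape)  # (3, 2)
```

In the paper's pipeline, an encoder would produce the latent 3D representation from the source image and a decoder would map the transformed, projected features back to pixels; the sketch isolates just the differentiable geometry injected between the two.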

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Chen_2019_ICCV,
author = {Chen, Xu and Song, Jie and Hilliges, Otmar},
title = {Monocular Neural Image Based Rendering With Continuous View Control},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}