Self-Guided Novel View Synthesis via Elastic Displacement Network

Yicun Liu, Jiawei Zhang, Ye Ma, Jimmy Ren; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2020, pp. 164-173

Abstract


Synthesizing novel views of a scene from new viewpoints is an essential problem in 3D vision. Among the variety of view-synthesis tasks, synthesis from a single image is particularly challenging. Recent works address this problem with a fixed number of image planes at discrete disparities, which tends to produce structurally inconsistent results on wide-baseline datasets with complex scenes, such as KITTI. In this paper, we propose the Self-Guided Elastic Displacement Network (SG-EDN), which explicitly models the geometric transformation through a novel non-discrete scene representation called layered displacement maps (LDM). To generate realistic views, we exploit the positional characteristics of the displacement maps and design a multi-scale structural pyramid that performs self-guided filtering on them. To optimize efficiency and scene adaptivity, we allow the effective range of each displacement map to be elastic, with fully learnable parameters. Experimental results confirm that our framework outperforms existing methods in both quantitative and qualitative evaluations.
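
The abstract describes the representation only at a high level, so the PyTorch sketch below is merely one plausible illustration of a layered displacement head with elastic, learnable ranges, not the paper's actual architecture. The names (ElasticDisplacementHead, num_layers, range_scale) are hypothetical, and the multi-scale self-guided filtering pyramid is omitted entirely.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ElasticDisplacementHead(nn.Module):
    """Hypothetical sketch: predict K per-pixel displacement maps plus soft
    blend weights; each map's effective range is a learnable scalar rather
    than a fixed discrete disparity plane."""

    def __init__(self, in_channels: int, num_layers: int = 8):
        super().__init__()
        self.num_layers = num_layers
        # Per-layer horizontal displacement fields and blend-weight logits.
        self.to_disp = nn.Conv2d(in_channels, num_layers, kernel_size=3, padding=1)
        self.to_weight = nn.Conv2d(in_channels, num_layers, kernel_size=3, padding=1)
        # "Elastic" part: each layer's displacement range is a learnable
        # scale (initialized to spread over a guessed range), so layers can
        # adapt to the scene instead of covering fixed disparities.
        self.range_scale = nn.Parameter(torch.linspace(0.0, 0.3, num_layers))

    def forward(self, feat: torch.Tensor, src_img: torch.Tensor) -> torch.Tensor:
        # `feat` is assumed to come from some backbone and to share the
        # spatial size of `src_img` (B, C, H, W).
        b, _, h, w = src_img.shape
        # Bounded displacements, scaled per layer by the learnable range.
        disp = torch.tanh(self.to_disp(feat)) * self.range_scale.view(1, -1, 1, 1)
        weight = torch.softmax(self.to_weight(feat), dim=1)  # (B, K, H, W)

        # Base sampling grid in normalized [-1, 1] coordinates.
        ys = torch.linspace(-1.0, 1.0, h, device=src_img.device)
        xs = torch.linspace(-1.0, 1.0, w, device=src_img.device)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")

        out = torch.zeros_like(src_img)
        for k in range(self.num_layers):
            # Shift x-coordinates by the k-th displacement map, warp the
            # source image, and blend with the layer's soft weight.
            grid = torch.stack(
                (gx.unsqueeze(0) + disp[:, k], gy.unsqueeze(0).expand(b, -1, -1)),
                dim=-1,
            )
            warped = F.grid_sample(src_img, grid, align_corners=True)
            out = out + weight[:, k : k + 1] * warped
        return out

# Example usage with random tensors (the backbone producing `feat` is assumed):
head = ElasticDisplacementHead(in_channels=64, num_layers=8)
img = torch.rand(1, 3, 128, 416)
feat = torch.rand(1, 64, 128, 416)
novel_view = head(feat, img)  # (1, 3, 128, 416)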

Related Material


@InProceedings{Liu_2020_WACV,
author = {Liu, Yicun and Zhang, Jiawei and Ma, Ye and Ren, Jimmy},
title = {Self-Guided Novel View Synthesis via Elastic Displacement Network},
booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
month = {March},
year = {2020}
}