360 Panorama Synthesis from a Sparse Set of Images with Unknown Field of View

Julius Surya Sumantri, In Kyu Park; The IEEE Winter Conference on Applications of Computer Vision (WACV), 2020, pp. 2386-2395

Abstract


360 images represent scenes captured in all possible viewing directions and enable viewers to navigate freely around the scene, thereby providing an immersive experience. Conversely, conventional images represent scenes in a single viewing direction with a small or limited field of view (FOV). As a result, only certain parts of the scene are observed, and valuable information about the surroundings is lost. In this paper, a learning-based approach is proposed that reconstructs the scene in 360 x 180 from a sparse set of conventional images (typically 4 images). The proposed approach first estimates the FOV of the input images relative to the panorama. The estimated FOV is then used as the prior for synthesizing a high-resolution 360 panoramic output. The proposed method overcomes the difficulty that learning-based approaches have in synthesizing high-resolution images (up to 512x1024). Experimental results demonstrate that the proposed method produces 360 panoramas of reasonable quality. The results also show that the proposed method outperforms the alternative method and generalizes to non-panoramic scenes and images captured by a smartphone camera.
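The FOV prior described above amounts to mapping each input image's angular extent onto the equirectangular panorama grid. As a rough illustration (a hypothetical helper, not the authors' code), a panorama of 512x1024 pixels spans 360 degrees horizontally and 180 degrees vertically, so an estimated FOV translates directly into a pixel footprint:

```python
def fov_to_panorama_extent(h_fov_deg, v_fov_deg, pano_w=1024, pano_h=512):
    """Map an image's estimated FOV to its pixel footprint in an
    equirectangular 360 x 180 panorama (illustrative sketch only).
    The panorama spans 360 deg over pano_w pixels horizontally and
    180 deg over pano_h pixels vertically."""
    w = round(pano_w * h_fov_deg / 360.0)
    h = round(pano_h * v_fov_deg / 180.0)
    return w, h

# E.g., a 90 x 60 degree input covers roughly a 256 x 171 patch
# of a 512x1024 panorama.
print(fov_to_panorama_extent(90, 60))
```

This footprint is what allows the sparse inputs to be placed consistently before the synthesis network fills in the unobserved regions.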

Related Material


[pdf]
[bibtex]
@InProceedings{Sumantri_2020_WACV,
author = {Sumantri, Julius Surya and Park, In Kyu},
title = {360 Panorama Synthesis from a Sparse Set of Images with Unknown Field of View},
booktitle = {The IEEE Winter Conference on Applications of Computer Vision (WACV)},
month = {March},
year = {2020}
}