Automated Virtual Navigation and Monocular Localization of Indoor Spaces From Videos

Qiong Wu, Ambrose Li; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2018, pp. 1524-1532

Abstract


3D virtual navigation and localization in large indoor spaces (e.g., shopping malls and offices) are usually studied as two separate problems. In this paper, we propose an automated framework that publishes both 3D virtual navigation and monocular localization services, requiring only videos (or bursts of images) of the environment as input. The framework unifies the two problems because the collected data serve both: they drive the 3D visual model reconstruction and provide the training data for monocular localization. The strength of our approach is that it needs no human-labeled data; instead, it automates both services from raw video (or burst-of-image) data captured with a common mobile device. We build a prototype system that publishes both virtual navigation and localization services for a shopping mall from such raw data. Two web applications are developed on top of these services. One allows navigation in 3D along the original video traces, where the user can also stop at any time to explore the 3D space freely. The other allows a user to determine his/her location by uploading an image of the venue. This low barrier to data acquisition makes our system widely applicable across domains and significantly reduces service cost for potential customers.
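As a rough illustration of the localization service described above, the sketch below shows one standard way to localize a single query image against a prebuilt 3D model: match query features to descriptors attached to the reconstructed 3D points, then recover the camera pose with PnP + RANSAC. This is not the paper's confirmed pipeline; the file names (points3d.npy, descriptors.npy, query.jpg), the camera intrinsics K, and the matching parameters are all hypothetical assumptions for the sketch.

import numpy as np
import cv2

# Hypothetical outputs of the reconstruction stage: N x 3 point coordinates
# and the N x 128 SIFT descriptors observed at those points.
points3d = np.load("points3d.npy").astype(np.float32)
model_desc = np.load("descriptors.npy").astype(np.float32)

# Assumed intrinsics of the query camera (focal length, principal point).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

# Detect SIFT features in the image uploaded by the user.
query = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
kp, query_desc = sift.detectAndCompute(query, None)

# Match query descriptors to model descriptors with Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(query_desc, model_desc, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Build 2D-3D correspondences: query keypoint locations vs. model points.
pts2d = np.float32([kp[m.queryIdx].pt for m in good])
pts3d = np.float32([points3d[m.trainIdx] for m in good])

# Robustly estimate the camera pose; rvec/tvec map world coordinates into
# the camera frame, so the camera position in the model is -R^T t.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts3d, pts2d, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)
    cam_pos = (-R.T @ tvec).ravel()
    print("Estimated camera position in model coordinates:", cam_pos)

In a deployed service, the descriptor matching would typically use an approximate nearest-neighbor index rather than brute force, since a mall-scale model can contain millions of points.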

Related Material


[pdf]
[bibtex]
@InProceedings{Wu_2018_CVPR_Workshops,
author = {Wu, Qiong and Li, Ambrose},
title = {Automated Virtual Navigation and Monocular Localization of Indoor Spaces From Videos},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2018}
}