Local Supports Global: Deep Camera Relocalization With Sequence Enhancement

Fei Xue, Xin Wang, Zike Yan, Qiuyuan Wang, Junqiu Wang, Hongbin Zha; The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 2841-2850


We propose to leverage the local information in an image sequence to support global camera relocalization. In contrast to previous methods that regress global poses from single images, we exploit the spatial-temporal consistency in sequential images to alleviate uncertainty due to visual ambiguities by incorporating a visual odometry (VO) component. Specifically, we introduce two effective steps called content-augmented pose estimation and motion-based refinement. The content-augmentation step alleviates the uncertainty of pose estimation by augmenting the observation based on the co-visibility in local maps built by the VO stream. In addition, the motion-based refinement is formulated as a pose graph, in which the camera poses are further optimized by adopting relative poses provided by the VO component as additional motion constraints. Thus, global consistency can be guaranteed. Experiments on the public indoor 7-Scenes and outdoor Oxford RobotCar benchmark datasets demonstrate that, benefiting from the local information inherent in the sequence, our approach outperforms state-of-the-art methods, especially in challenging cases, e.g., insufficient texture, highly repetitive textures, similar appearances, and over-exposure.
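The motion-based refinement described above can be illustrated with a deliberately simplified sketch. Here the pose graph couples noisy absolute pose estimates (standing in for the global regression outputs) with relative motions (standing in for the VO constraints) and solves the resulting linear least-squares problem. For clarity, poses are reduced to 1-D translations; the paper itself optimizes full 6-DoF camera poses, and `refine_poses`, its weights, and the toy data are all hypothetical names introduced for this example, not part of the authors' code.

```python
# Hypothetical, simplified pose-graph refinement: fuse noisy absolute
# pose estimates with relative poses (e.g., from a VO stream) by linear
# least squares. Poses are 1-D translations here for illustration only.
import numpy as np


def refine_poses(absolute, relative, w_abs=1.0, w_rel=10.0):
    """Minimize  w_abs * sum_i (x_i - absolute_i)^2
               + w_rel * sum_i ((x_{i+1} - x_i) - relative_i)^2
    over the refined poses x (a weighted linear least-squares problem)."""
    n = len(absolute)
    sa, sr = np.sqrt(w_abs), np.sqrt(w_rel)
    rows, rhs = [], []
    # Unary terms: each pose should stay near its absolute estimate.
    for i in range(n):
        row = np.zeros(n)
        row[i] = sa
        rows.append(row)
        rhs.append(sa * absolute[i])
    # Binary terms: consecutive poses should respect the relative motion.
    for i in range(n - 1):
        row = np.zeros(n)
        row[i], row[i + 1] = -sr, sr
        rows.append(row)
        rhs.append(sr * relative[i])
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return x
```

With accurate relative motions and a large `w_rel`, the refined trajectory recovers the smooth shape of the true motion while the absolute terms anchor its global position, which is the intuition behind using VO as an additional constraint.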

Related Material

@InProceedings{Xue_2019_ICCV,
  author    = {Xue, Fei and Wang, Xin and Yan, Zike and Wang, Qiuyuan and Wang, Junqiu and Zha, Hongbin},
  title     = {Local Supports Global: Deep Camera Relocalization With Sequence Enhancement},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2019}
}