Pose Correction for Highly Accurate Visual Localization in Large-Scale Indoor Spaces

Janghun Hyeon, Joohyung Kim, Nakju Doh; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 15974-15983

Abstract


Indoor visual localization is important for applications such as autonomous robots, augmented reality, and mixed reality. Recent advances in visual localization have demonstrated feasibility in large-scale indoor spaces through coarse-to-fine methods that typically employ three steps: image retrieval, pose estimation, and pose selection. However, the accuracy of large-scale indoor visual localization still leaves room for improvement. We show that the limitations of previous methods can be attributed to the sparsity of image positions in the database, which causes view differences between a query and the images retrieved from the database. To address this problem, we propose a novel module, named pose correction, that re-estimates the pose by reorganizing the local features so that matching is performed between similar views. This module improves the accuracy of the initially estimated pose and assigns more reliable ranks. With it, the proposed method achieves a new state of the art on the challenging indoor benchmark InLoc, surpassing 90% accuracy within 1.0 m for the first time.
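The coarse-to-fine pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the mock pose estimator, and the correction step are all hypothetical stand-ins, with cosine similarity over global descriptors standing in for image retrieval and inlier counts standing in for pose selection.

```python
import numpy as np

def retrieve(query_desc, db_descs, top_k=3):
    """Coarse step: rank database images by cosine similarity of
    global descriptors and return the indices of the top_k matches."""
    sims = db_descs @ query_desc / (
        np.linalg.norm(db_descs, axis=1) * np.linalg.norm(query_desc) + 1e-9)
    return np.argsort(-sims)[:top_k]

def localize(query_desc, db_descs, estimate_pose, correct_pose, top_k=3):
    """Coarse-to-fine pipeline: retrieval -> per-candidate pose
    estimation -> selection by inlier count -> pose correction.
    estimate_pose(i) returns a (pose, n_inliers) hypothesis."""
    candidates = retrieve(query_desc, db_descs, top_k)
    hypotheses = [estimate_pose(i) for i in candidates]      # pose estimation
    pose, _ = max(hypotheses, key=lambda h: h[1])            # pose selection
    return correct_pose(pose)                                # pose correction

# Toy usage with synthetic descriptors and mock estimation/correction.
rng = np.random.default_rng(0)
db = rng.normal(size=(10, 8))
query = db[4] + 0.01 * rng.normal(size=8)    # query resembles image 4
mock_estimate = lambda i: (np.array([float(i), 0.0, 0.0]),
                           100 if i == 4 else 10)
mock_correct = lambda pose: pose + 0.1       # stands in for re-matching
pose = localize(query, db, mock_estimate, mock_correct)
```

In the paper's pipeline, the correction step would re-run local feature matching in a view rendered near the initially estimated pose, rather than the constant offset used here for illustration.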

Related Material


@InProceedings{Hyeon_2021_ICCV,
  author    = {Hyeon, Janghun and Kim, Joohyung and Doh, Nakju},
  title     = {Pose Correction for Highly Accurate Visual Localization in Large-Scale Indoor Spaces},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2021},
  pages     = {15974-15983}
}