OrienterNet: Visual Localization in 2D Public Maps With Neural Matching

Paul-Edouard Sarlin, Daniel DeTone, Tsun-Yi Yang, Armen Avetisyan, Julian Straub, Tomasz Malisiewicz, Samuel Rota Bulò, Richard Newcombe, Peter Kontschieder, Vasileios Balntas; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, pp. 21632-21642

Abstract


Humans can orient themselves in their 3D environments using simple 2D maps. In contrast, algorithms for visual localization mostly rely on complex 3D point clouds that are expensive to build, store, and maintain over time. We bridge this gap by introducing OrienterNet, the first deep neural network that can localize an image with sub-meter accuracy using the same 2D semantic maps that humans use. OrienterNet estimates the location and orientation of a query image by matching a neural Bird's-Eye View with open and globally available maps from OpenStreetMap, enabling anyone to localize anywhere such maps are available. OrienterNet is supervised only by camera poses but learns to perform semantic matching with a wide range of map elements in an end-to-end manner. To enable this, we introduce a large crowd-sourced dataset of images captured across 12 cities from the diverse viewpoints of cars, bikes, and pedestrians. OrienterNet generalizes to new datasets and pushes the state of the art in both robotics and AR scenarios. The code is available at https://github.com/facebookresearch/OrienterNet.
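
The abstract describes the core mechanism only at a high level: a neural Bird's-Eye View (BEV) feature template inferred from the query image is matched against a neural raster encoded from OpenStreetMap tiles, yielding a score over planar poses (position and heading). The snippet below is a minimal, hypothetical PyTorch sketch of such exhaustive template matching; all tensor shapes, function names, and the rotate-then-correlate strategy via conv2d are assumptions for illustration, not the authors' implementation (see the linked repository for the real code).

# Hypothetical sketch: exhaustive BEV-to-map matching over translations and rotations.
# Shapes, names, and the correlation strategy are assumptions, not OrienterNet's code.
import math

import torch
import torch.nn.functional as F


def pose_score_volume(map_feats: torch.Tensor,
                      bev_feats: torch.Tensor,
                      num_rotations: int = 64) -> torch.Tensor:
    """Return a (num_rotations, H, W) score volume over planar poses.

    map_feats: (C, H, W) neural map raster (e.g. encoded from OpenStreetMap tiles).
    bev_feats: (C, h, w) neural BEV template inferred from the query image.
    """
    C, h, w = bev_feats.shape
    scores = []
    for k in range(num_rotations):
        angle = 2.0 * math.pi * k / num_rotations
        cos_a, sin_a = math.cos(angle), math.sin(angle)
        # Rotate the BEV template about its center; grid_sample keeps the size fixed.
        rot = torch.tensor([[cos_a, -sin_a, 0.0],
                            [sin_a,  cos_a, 0.0]])
        grid = F.affine_grid(rot[None], (1, C, h, w), align_corners=False)
        template = F.grid_sample(bev_feats[None], grid, align_corners=False)
        # Dense correlation of the rotated template against the map raster
        # gives a translation score map for this candidate heading.
        score = F.conv2d(map_feats[None], template, padding="same")
        scores.append(score[0, 0])
    return torch.stack(scores)  # (num_rotations, H, W)


# Usage: the argmax of the volume gives the most likely (heading bin, y, x) pose.
if __name__ == "__main__":
    map_feats = torch.randn(8, 256, 256)   # toy map features
    bev_feats = torch.randn(8, 64, 64)     # toy BEV features
    volume = pose_score_volume(map_feats, bev_feats, num_rotations=16)
    R, H, W = volume.shape
    idx = int(volume.argmax())
    r, y, x = idx // (H * W), (idx // W) % H, idx % W
    print("best (rotation_bin, y, x):", r, y, x)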

Related Material


[bibtex]
@InProceedings{Sarlin_2023_CVPR,
    author    = {Sarlin, Paul-Edouard and DeTone, Daniel and Yang, Tsun-Yi and Avetisyan, Armen and Straub, Julian and Malisiewicz, Tomasz and Bul\`o, Samuel Rota and Newcombe, Richard and Kontschieder, Peter and Balntas, Vasileios},
    title     = {OrienterNet: Visual Localization in 2D Public Maps With Neural Matching},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {21632-21642}
}