Viewpoint Invariant Dense Matching for Visual Geolocalization

Gabriele Berton, Carlo Masone, Valerio Paolicelli, Barbara Caputo; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 12169-12178

Abstract


In this paper we propose a novel method for image matching based on dense local features and tailored for visual geolocalization. Dense local feature matching is robust against changes in illumination and occlusions, but not against viewpoint shifts, which are a fundamental aspect of geolocalization. Our method, called GeoWarp, directly embeds invariance to viewpoint shifts in the process of extracting dense features. This is achieved via a trainable module which learns from the data an invariance that is meaningful for the task of recognizing places. We also devise a new self-supervised loss and two new weakly supervised losses to train this module using only unlabeled data and weak labels. GeoWarp is implemented efficiently as a re-ranking method that can be easily embedded into pre-existing visual geolocalization pipelines. Experimental validation on standard geolocalization benchmarks demonstrates that GeoWarp boosts the accuracy of state-of-the-art retrieval architectures. The code and trained models will be released upon acceptance of this paper.
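To make the retrieve-then-re-rank setup the abstract describes concrete, here is a minimal sketch of such a pipeline: a first stage ranks the database by global-descriptor similarity, and a second stage re-ranks the shortlist with a dense local-feature matching score. All function names and the mutual-nearest-neighbor scoring below are illustrative assumptions, not GeoWarp's actual implementation.

```python
# Hypothetical sketch of a two-stage visual geolocalization pipeline.
# The dense-matching score is a simple mutual-nearest-neighbor stand-in
# for a learned, viewpoint-invariant matcher such as GeoWarp.
import numpy as np

def retrieve_top_k(query_global, db_globals, k=5):
    """Stage 1: rank database images by global-descriptor similarity."""
    sims = db_globals @ query_global          # cosine similarity (unit vectors)
    return np.argsort(-sims)[:k]

def dense_match_score(query_dense, cand_dense):
    """Stage 2 score: sum similarities over mutual nearest-neighbor matches
    between the two images' dense local features."""
    sims = query_dense @ cand_dense.T         # (Nq, Nc) pairwise similarities
    fwd = sims.argmax(axis=1)                 # best candidate feature per query feature
    bwd = sims.argmax(axis=0)                 # best query feature per candidate feature
    mutual = bwd[fwd] == np.arange(len(fwd))  # keep mutual nearest neighbors only
    return sims[np.arange(len(fwd)), fwd][mutual].sum()

def rerank(query_global, query_dense, db_globals, db_denses, k=5):
    """Retrieve top-k candidates globally, then re-rank them by dense matching."""
    cands = retrieve_top_k(query_global, db_globals, k)
    scores = [dense_match_score(query_dense, db_denses[i]) for i in cands]
    return cands[np.argsort(-np.asarray(scores))]
```

In this scheme the expensive dense matching only runs on the small top-k shortlist, which is why a re-ranking module like GeoWarp can be dropped into an existing retrieval pipeline without changing its first stage.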

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Berton_2021_ICCV,
  author    = {Berton, Gabriele and Masone, Carlo and Paolicelli, Valerio and Caputo, Barbara},
  title     = {Viewpoint Invariant Dense Matching for Visual Geolocalization},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2021},
  pages     = {12169-12178}
}