SegLoc: Learning Segmentation-Based Representations for Privacy-Preserving Visual Localization

Maxime Pietrantoni, Martin Humenberger, Torsten Sattler, Gabriela Csurka; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, pp. 15380-15391

Abstract


Inspired by properties of semantic segmentation, in this paper we investigate how to leverage robust image segmentation in the context of privacy-preserving visual localization. We propose a new localization framework, SegLoc, that leverages image segmentation to create robust, compact, and privacy-preserving scene representations, i.e., 3D maps. We build upon the correspondence-supervised, fine-grained segmentation approach of Larsson et al. (ICCV'19), making it more robust by learning a set of cluster labels through discriminative clustering and by adding consistency regularization terms; in addition, we jointly learn a global image representation along with a dense local representation. In our localization pipeline, the former is used to retrieve the most similar database images, while the latter refines the retrieved poses by minimizing the label inconsistency between the 3D points of the map and their projections onto the query image. In various experiments, we show that our proposed representation achieves (close-to) state-of-the-art pose estimation results while using only a compact 3D map that does not contain enough information about the original images for an attacker to reconstruct personal information.
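To make the pose-refinement objective described above concrete, the following is a minimal, illustrative sketch (not the authors' code) of a label-inconsistency cost: map points carrying cluster labels are projected into the query image with a candidate pose, and the cost penalizes projections that land on pixels whose predicted label distribution disagrees with the point's label. The function names, the pinhole projection, the hard point labels, and the softmax probability map are all assumptions made for illustration.

```python
# Illustrative sketch of a label-inconsistency cost for segmentation-based pose refinement.
# Assumptions (not from the paper): pinhole camera, hard cluster labels per 3D point,
# and a per-pixel class-probability map produced by the query segmentation network.
import numpy as np

def project(points_w, R, t, K):
    """Project world-frame 3D points into the image with pose (R, t) and intrinsics K."""
    points_c = points_w @ R.T + t        # world frame -> camera frame
    uv = points_c @ K.T                  # apply intrinsics
    uv = uv[:, :2] / uv[:, 2:3]          # perspective division
    return uv, points_c[:, 2]            # pixel coordinates and depths

def label_inconsistency(points_w, point_labels, prob_map, R, t, K):
    """Mean negative log-probability of each map point's label at its projected pixel."""
    h, w, _ = prob_map.shape
    uv, depth = project(points_w, R, t, K)
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    # Keep only points that project inside the image and lie in front of the camera.
    valid = (depth > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    if not np.any(valid):
        return np.inf
    probs = prob_map[v[valid], u[valid], point_labels[valid]]
    return float(-np.log(probs + 1e-8).mean())

# Toy usage: a pose retrieved via the global representation would be refined by
# searching for (R, t) that lowers this cost (e.g., with a local optimizer).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
    points = rng.uniform([-1, -1, 4], [1, 1, 8], size=(100, 3))   # hypothetical map points
    labels = rng.integers(0, 16, size=100)                        # hypothetical cluster labels
    prob_map = rng.dirichlet(np.ones(16), size=(480, 640))        # hypothetical query segmentation
    cost = label_inconsistency(points, labels, prob_map, np.eye(3), np.zeros(3), K)
    print(f"label-inconsistency cost for the initial pose: {cost:.3f}")
```

The sketch only evaluates the cost for a fixed pose; in an actual pipeline this scalar would be minimized over the camera pose parameters starting from the retrieved pose.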

Related Material


[bibtex]
@InProceedings{Pietrantoni_2023_CVPR,
    author    = {Pietrantoni, Maxime and Humenberger, Martin and Sattler, Torsten and Csurka, Gabriela},
    title     = {SegLoc: Learning Segmentation-Based Representations for Privacy-Preserving Visual Localization},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {15380-15391}
}