@InProceedings{Germain_2022_CVPR,
  author    = {Germain, Hugo and DeTone, Daniel and Pascoe, Geoffrey and Schmidt, Tanner and Novotny, David and Newcombe, Richard and Sweeney, Chris and Szeliski, Richard and Balntas, Vasileios},
  title     = {Feature Query Networks: Neural Surface Description for Camera Pose Refinement},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2022},
  pages     = {5071-5081}
}
Feature Query Networks: Neural Surface Description for Camera Pose Refinement
Abstract
Accurate 6-DoF camera pose estimation in known environments can be very challenging, especially when the query image is captured from viewpoints that differ strongly from the set of reference camera poses. While structure-based methods have proven to deliver accurate camera pose estimates, they rely on pre-computed 3D descriptors derived from reference images that are often misaligned with the query images. This descriptor discrepancy can harm the downstream camera pose estimation task. In this paper, we introduce the Feature Query Network (FQN), a ray-based descriptor regressor that can be queried for descriptors at known 3D locations under novel viewpoints. We show that the FQN is able to model the viewpoint dependency of high-dimensional state-of-the-art keypoint descriptors and brings significant relative improvements over structure-based visual localization baselines.
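To make the core idea of the abstract concrete, the following is a minimal, hypothetical sketch of a ray-based descriptor regressor: a small MLP that maps a known 3D point plus a viewing-ray direction to a high-dimensional descriptor, so descriptors can be queried under novel viewpoints. All names, layer sizes, and the sinusoidal positional encoding are assumptions made for illustration, not the authors' actual FQN architecture.

```python
# Hypothetical sketch of a ray-based descriptor regressor in the spirit of
# the Feature Query Network (FQN). Architecture details (encoding, widths,
# normalization) are assumptions, not the paper's design.
import numpy as np

def positional_encoding(x, num_freqs=4):
    """Sin/cos encoding of each coordinate (an assumed NeRF-style choice)."""
    freqs = 2.0 ** np.arange(num_freqs)     # (F,)
    angles = x[..., None] * freqs           # (..., D, F)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)   # (..., D * 2F)

class FeatureQueryMLP:
    """Tiny MLP: (3D point, viewing-ray direction) -> L2-normalized descriptor."""
    def __init__(self, desc_dim=128, hidden=256, num_freqs=4, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = 2 * 3 * 2 * num_freqs      # encoded point + encoded direction
        self.num_freqs = num_freqs
        self.W1 = rng.standard_normal((in_dim, hidden)) / np.sqrt(in_dim)
        self.b1 = np.zeros(hidden)
        self.W2 = rng.standard_normal((hidden, desc_dim)) / np.sqrt(hidden)
        self.b2 = np.zeros(desc_dim)

    def query(self, points, directions):
        """points: (N, 3) world coordinates; directions: (N, 3) viewing rays."""
        directions = directions / np.linalg.norm(directions, axis=-1, keepdims=True)
        feats = np.concatenate(
            [positional_encoding(points, self.num_freqs),
             positional_encoding(directions, self.num_freqs)], axis=-1)
        h = np.maximum(feats @ self.W1 + self.b1, 0.0)          # ReLU hidden layer
        d = h @ self.W2 + self.b2
        return d / np.linalg.norm(d, axis=-1, keepdims=True)    # unit descriptors

# Query descriptors for two known 3D points as seen from a novel viewpoint.
fqn = FeatureQueryMLP(desc_dim=128)
pts = np.array([[0.1, 0.2, 1.5], [-0.3, 0.0, 2.0]])
rays = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]])
desc = fqn.query(pts, rays)
```

In a pose-refinement loop, such queried descriptors could then be matched against features extracted from the query image, replacing the pre-computed reference-view descriptors whose viewpoint misalignment the abstract identifies as the source of error.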