End2End Multi-View Feature Matching with Differentiable Pose Optimization

Barbara Roessle, Matthias Nießner; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 477-487

Abstract


Erroneous feature matches have a severe impact on subsequent camera pose estimation and often require additional, time-consuming measures such as RANSAC for outlier rejection. Our method tackles this challenge by addressing feature matching and pose optimization jointly. To this end, we propose a graph attention network to predict image correspondences along with confidence weights. The resulting matches serve as weighted constraints in a differentiable pose estimation. Training feature matching with gradients from the pose optimization naturally learns to down-weight outliers and boosts pose estimation on image pairs compared to SuperGlue by 6.7% on ScanNet. At the same time, it reduces the pose estimation time by over 50% and renders RANSAC iterations unnecessary. Moreover, we integrate information from multiple views by spanning the graph across multiple frames to predict all matches at once. Multi-view matching combined with end-to-end training improves the pose estimation metrics on Matterport3D by 18.5% compared to SuperGlue.
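To make the idea of confidence-weighted, differentiable pose constraints concrete, the following is a minimal sketch, not the authors' implementation: a weighted eight-point essential-matrix solver in PyTorch, in which per-match confidence weights scale each correspondence constraint, and the solver is composed of differentiable operations so that a downstream pose loss can backpropagate into the matcher. The function name weighted_eight_point and all tensors below are hypothetical and serve only as an illustration of the general technique under these assumptions.

# Hedged sketch: a differentiable, weighted eight-point solver for the
# essential matrix. Illustrates how predicted confidence weights can act as
# soft constraints in pose estimation; this is NOT the paper's implementation.
import torch


def weighted_eight_point(x1: torch.Tensor, x2: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    """Estimate an essential matrix from weighted correspondences.

    x1, x2: (N, 2) matched points in normalized camera coordinates.
    w:      (N,)  confidence weights in [0, 1], e.g. predicted by a matcher.
    Returns a (3, 3) essential matrix; gradients flow to x1, x2, and w.
    """
    ones = torch.ones(x1.shape[0], 1, dtype=x1.dtype, device=x1.device)
    p1 = torch.cat([x1, ones], dim=1)  # (N, 3) homogeneous points
    p2 = torch.cat([x2, ones], dim=1)

    # Each correspondence gives one linear constraint p2^T E p1 = 0.
    A = (p2.unsqueeze(2) * p1.unsqueeze(1)).reshape(-1, 9)  # (N, 9)
    A = w.unsqueeze(1) * A  # down-weight low-confidence matches

    # Solve min ||A e|| subject to ||e|| = 1 via SVD (differentiable in PyTorch).
    _, _, Vt = torch.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)

    # Project onto the essential-matrix manifold (two equal singular values, one zero).
    U, S, Vt2 = torch.linalg.svd(E)
    s = (S[0] + S[1]) / 2
    S_fixed = torch.stack([s, s, torch.zeros_like(S[2])])
    return U @ torch.diag(S_fixed) @ Vt2


if __name__ == "__main__":
    torch.manual_seed(0)
    n = 64
    x1 = torch.randn(n, 2, requires_grad=True)
    x2 = torch.randn(n, 2, requires_grad=True)
    w = torch.sigmoid(torch.randn(n, requires_grad=True))  # predicted confidences
    E = weighted_eight_point(x1, x2, w)
    # A downstream pose loss would backpropagate through E into the matcher;
    # here a dummy scalar stands in for that loss.
    E.norm().backward()
    print(E)

Because every step (constraint construction, weighting, SVD) is differentiable, low-confidence outlier matches receive small gradients toward influencing the pose, which is the intuition behind training the matcher end to end with pose supervision instead of rejecting outliers with RANSAC at test time.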

Related Material


BibTeX
@InProceedings{Roessle_2023_ICCV,
    author    = {Roessle, Barbara and Nie{\ss}ner, Matthias},
    title     = {End2End Multi-View Feature Matching with Differentiable Pose Optimization},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {477-487}
}