DFM: A Performance Baseline for Deep Feature Matching

Ufuk Efe, Kutalmis Gokalp Ince, Aydin Alatan; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2021, pp. 4284-4293

Abstract


A novel image matching method is proposed that utilizes learned features extracted by an off-the-shelf deep neural network to achieve promising performance. The proposed method uses a pre-trained VGG architecture as a feature extractor and requires no additional training specific to the matching task. Inspired by well-established concepts in psychology, such as the Mental Rotation paradigm, an initial warping is performed based on a preliminary geometric transformation estimate. This estimate is obtained simply by dense nearest-neighbor matching between the terminal-layer VGG features of the two images to be matched. After this initial alignment, the same approach is repeated hierarchically between the reference and aligned images to achieve good localization and matching performance. Our algorithm achieves overall Mean Matching Accuracy (MMA) scores of 0.57 and 0.80 at 1-pixel and 2-pixel thresholds, respectively, on the HPatches dataset [4], which indicates better performance than the state-of-the-art.
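The two-stage idea described above is straightforward to prototype. Below is a minimal, hypothetical Python sketch of the first stage, assuming torchvision's pre-trained VGG-19 as the frozen feature extractor and OpenCV's RANSAC homography for the preliminary geometric estimate; the function names (deep_features, mutual_nn_matches, coarse_align) and the choice of terminal layer (the last ReLU of VGG-19, stride 16) are illustrative assumptions, not the authors' released implementation.

import cv2
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms.functional as TF

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Off-the-shelf VGG-19, frozen and used only as a dense feature extractor.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval().to(device)

def deep_features(img_bgr, last_layer=35):
    """Dense feature map from VGG-19 up to `last_layer` (ReLU after conv5_4, stride 16)."""
    x = TF.to_tensor(cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB))
    x = TF.normalize(x, [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]).unsqueeze(0).to(device)
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i == last_layer:
                break
    return x.squeeze(0)  # (C, H, W)

def mutual_nn_matches(fa, fb):
    """Dense mutual nearest-neighbor matching between two feature maps."""
    ca, ha, wa = fa.shape
    cb, hb, wb = fb.shape
    da = F.normalize(fa.reshape(ca, -1), dim=0)
    db = F.normalize(fb.reshape(cb, -1), dim=0)
    sim = da.t() @ db                    # cosine similarities, (ha*wa) x (hb*wb)
    ab = sim.argmax(dim=1)               # best match in B for each cell of A
    ba = sim.argmax(dim=0)               # best match in A for each cell of B
    ia = torch.arange(ab.numel(), device=ab.device)
    keep = ba[ab] == ia                  # keep only mutually consistent pairs
    ia, ib = ia[keep], ab[keep]
    pa = torch.stack([ia % wa, ia // wa], dim=1).float()
    pb = torch.stack([ib % wb, ib // wb], dim=1).float()
    return pa.cpu().numpy(), pb.cpu().numpy()

def coarse_align(img_a, img_b, stride=16.0):
    """Stage 1: homography from terminal-layer matches, then warp B onto A."""
    pa, pb = mutual_nn_matches(deep_features(img_a), deep_features(img_b))
    # Map feature-map cells back to image coordinates through the network
    # stride (cell-center offset ignored for simplicity in this sketch).
    H, _ = cv2.findHomography(pb * stride, pa * stride, cv2.RANSAC, 3.0)
    warped_b = cv2.warpPerspective(img_b, H, (img_a.shape[1], img_a.shape[0]))
    return warped_b, H

After this coarse alignment, the second stage would repeat the same mutual nearest-neighbor matching between the reference and the warped image, proceeding hierarchically from deep to shallow layers to refine match localization, as the abstract describes.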

Related Material


[pdf]
[bibtex]
@InProceedings{Efe_2021_CVPR,
    author    = {Efe, Ufuk and Ince, Kutalmis Gokalp and Alatan, Aydin},
    title     = {DFM: A Performance Baseline for Deep Feature Matching},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2021},
    pages     = {4284-4293}
}