Improved Descriptors for Patch Matching and Reconstruction

Rahul Mitra, Jiakai Zhang, Sanath Narayan, Shuaib Ahmed, Sharat Chandran, Arjun Jain; Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 2017, pp. 1023-1031

Abstract


We propose a convolutional neural network (ConvNet) based approach for learning local image descriptors that yields significantly improved patch matching and 3D reconstruction. A multi-resolution ConvNet is used to learn keypoint descriptors. We also propose a new dataset containing an order of magnitude more scenes, images, and positive and negative correspondences than the currently available Multi-View Stereo (MVS) dataset. The new dataset also covers a wider range of viewpoint, scale, and lighting changes than the MVS dataset. We evaluate our approach on publicly available datasets, such as the Oxford Affine Covariant Regions Dataset (ACRD), MVS, Synthetic, and Strecha datasets, to quantify descriptor performance on both the patch matching and 3D reconstruction tasks. Experiments show that the proposed descriptor outperforms the current state-of-the-art descriptors on both evaluation tasks.
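To make the multi-resolution idea concrete, the sketch below shows one plausible two-stream descriptor network in PyTorch. This is a hypothetical illustration, not the authors' released architecture: the 64x64 grayscale patch size, the centre-crop second stream, the layer widths, and the 128-D output dimension are all assumptions made for the example.

```python
# Hypothetical sketch of a multi-resolution descriptor ConvNet.
# Assumptions (not from the paper): 64x64 grayscale patches, a coarse stream
# on the full patch and a fine stream on an upsampled centre crop, 128-D output.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Stream(nn.Module):
    """One convolutional stream operating on a single resolution."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 7, stride=2, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, x):
        return self.features(x).flatten(1)  # (N, 128)


class MultiResDescriptor(nn.Module):
    """Concatenates a coarse (full-patch) and a fine (centre-crop) stream."""
    def __init__(self, dim=128):
        super().__init__()
        self.coarse = Stream()
        self.fine = Stream()
        self.fc = nn.Linear(256, dim)

    def forward(self, patch):  # patch: (N, 1, 64, 64)
        # Fine stream sees the central 32x32 region upsampled back to 64x64.
        centre = patch[:, :, 16:48, 16:48]
        centre = F.interpolate(centre, size=64, mode='bilinear',
                               align_corners=False)
        feat = torch.cat([self.coarse(patch), self.fine(centre)], dim=1)
        # L2-normalise so matching reduces to Euclidean / cosine distance.
        return F.normalize(self.fc(feat), dim=1)


if __name__ == "__main__":
    net = MultiResDescriptor()
    desc = net(torch.randn(8, 1, 64, 64))
    print(desc.shape)  # torch.Size([8, 128])
```

The intended split is that the coarse stream captures surrounding context while the fine stream preserves detail around the keypoint; in practice such a network would be trained on the positive and negative patch correspondences from the dataset, e.g. with a triplet or hinge-embedding loss.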

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Mitra_2017_ICCV,
author = {Mitra, Rahul and Zhang, Jiakai and Narayan, Sanath and Ahmed, Shuaib and Chandran, Sharat and Jain, Arjun},
title = {Improved Descriptors for Patch Matching and Reconstruction},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2017}
}