Learning 3D Keypoint Descriptors for Non-Rigid Shape Matching

Hanyu Wang, Jianwei Guo, Dong-Ming Yan, Weize Quan, Xiaopeng Zhang; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 3-19

Abstract


In this paper, we present a novel deep learning framework that derives discriminative local descriptors for 3D surface shapes. In contrast to previous convolutional neural networks (CNNs) that rely on rendering multi-view images or extracting intrinsic shape properties, we parameterize the multi-scale localized neighborhoods of a keypoint into regular 2D grids, termed 'geometry images'. Such geometry images retain sufficient geometric information while allowing the use of standard CNNs. Specifically, we leverage a triplet network to perform deep metric learning: the network takes a set of triplets as input and minimizes a newly designed triplet loss function to distinguish between similar and dissimilar pairs of keypoints. At the testing stage, given the geometry image of a point of interest, the network outputs a discriminative local descriptor for it. Experimental results for non-rigid shape matching on several benchmarks demonstrate the superior performance of our learned descriptors over traditional descriptors and state-of-the-art learning-based alternatives.
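
To illustrate the deep metric learning step described above, the sketch below shows a generic triplet setup in PyTorch. It is not the authors' released code: the small CNN (here called DescriptorNet), the geometry-image size and channel count, the descriptor dimension, and the margin value are all illustrative assumptions, and the standard hinge-style triplet loss stands in for the paper's newly designed loss.

# Minimal sketch of triplet-based metric learning over keypoint descriptors.
# All architecture details and hyperparameters here are assumptions, not the
# paper's actual configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DescriptorNet(nn.Module):
    """Maps a 2D 'geometry image' of a keypoint neighborhood to a descriptor."""
    def __init__(self, in_channels=3, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, dim)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return F.normalize(self.fc(z), dim=1)  # unit-length descriptor

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss: pull matching keypoints together and push
    non-matching ones at least `margin` farther apart (a generic stand-in
    for the paper's loss)."""
    d_pos = (anchor - positive).pow(2).sum(1)
    d_neg = (anchor - negative).pow(2).sum(1)
    return F.relu(d_pos - d_neg + margin).mean()

# Usage: one network with shared weights embeds all three triplet branches.
net = DescriptorNet()
a, p, n = (torch.randn(8, 3, 32, 32) for _ in range(3))  # dummy geometry images
loss = triplet_loss(net(a), net(p), net(n))
loss.backward()

The key design point this sketch captures is weight sharing: the same embedding network processes the anchor, positive, and negative geometry images, so the loss shapes a single descriptor space rather than three separate ones.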

Related Material


[bibtex]
@InProceedings{Wang_2018_ECCV,
author = {Wang, Hanyu and Guo, Jianwei and Yan, Dong-Ming and Quan, Weize and Zhang, Xiaopeng},
title = {Learning 3D Keypoint Descriptors for Non-Rigid Shape Matching},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018},
pages = {3--19}
}