PPF-FoldNet: Unsupervised Learning of Rotation Invariant 3D Local Descriptors
Haowen Deng, Tolga Birdal, Slobodan Ilic; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 602-618
Abstract
We present PPF-FoldNet for unsupervised learning of 3D local descriptors on pure point cloud geometry. Based on folding-based auto-encoding of the well-known point pair features, PPF-FoldNet offers many desirable properties: it requires neither supervision nor a sensitive local reference frame, benefits from point-set sparsity, is end-to-end, fast, and extracts powerful rotation-invariant descriptors. Thanks to a novel feature visualization, its evolution can be monitored to provide interpretable insights. Our extensive experiments demonstrate that despite having six degree-of-freedom invariance and lacking training labels, our network achieves state-of-the-art results on standard benchmark datasets and outperforms its competitors when rotations and varying point densities are present. PPF-FoldNet achieves 9% higher recall on standard benchmarks, 23% higher recall when rotations are introduced into the same datasets, and finally, a margin of >35% is attained when point density is significantly decreased.
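For context on the rotation-invariant input the network auto-encodes: the standard point pair feature for two oriented points is a 4-tuple of the angles between each normal and the connecting vector, the angle between the two normals, and the pairwise distance, all of which are invariant to rigid transformations. Below is a minimal NumPy sketch of this 4-tuple (not the authors' code; function names are illustrative and unit-length normals are assumed).

import numpy as np

def point_pair_feature(p1, n1, p2, n2):
    # Standard 4D point pair feature (PPF) for two oriented points:
    # (angle(n1, d), angle(n2, d), angle(n1, n2), ||d||), where d = p2 - p1.
    # Assumes n1 and n2 are unit-length surface normals.
    d = p2 - p1
    dist = np.linalg.norm(d)
    if dist < 1e-12:
        return np.zeros(4)
    d_unit = d / dist

    def angle(u, v):
        # Numerically safe angle between two unit vectors.
        return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

    return np.array([angle(n1, d_unit), angle(n2, d_unit), angle(n1, n2), dist])

In this spirit, the PPFs computed between a reference point and the points in its local neighborhood form a rotation-invariant set that a folding-based autoencoder can compress into a local descriptor.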
Related Material
[pdf] [bibtex]
@InProceedings{Deng_2018_ECCV,
author = {Deng, Haowen and Birdal, Tolga and Ilic, Slobodan},
title = {PPF-FoldNet: Unsupervised Learning of Rotation Invariant 3D Local Descriptors},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}