PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation

Charles R. Qi, Hao Su, Kaichun Mo, Leonidas J. Guibas; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 652-660

Abstract


A point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data into regular 3D voxel grids or collections of images. This, however, renders the data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds and respects the permutation invariance of the points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification and part segmentation to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par with or better than the state of the art. Theoretically, we provide analysis toward understanding what the network has learned and why it is robust to input perturbation and corruption.
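To illustrate the permutation-invariance idea mentioned above, the following is a minimal sketch of a PointNet-style classifier. It assumes PyTorch, the class name TinyPointNet and all layer sizes are hypothetical choices, and the paper's input/feature alignment networks are omitted for brevity; the point is only that a shared per-point MLP followed by max pooling (a symmetric function) yields a global feature that does not depend on the ordering of the input points.

```python
# Minimal, illustrative sketch (not the authors' released code): a PointNet-style
# classifier. A shared per-point MLP (1x1 convolutions) computes per-point
# features, and max pooling over points aggregates them symmetrically, so the
# output is invariant to the ordering of the input points.
import torch
import torch.nn as nn


class TinyPointNet(nn.Module):
    def __init__(self, num_classes=40):
        super().__init__()
        # Shared per-point MLP: each point is processed independently.
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.BatchNorm1d(1024), nn.ReLU(),
        )
        # Classifier head on the pooled global feature.
        self.head = nn.Sequential(
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, points):
        # points: (batch, num_points, 3) -> (batch, 3, num_points) for Conv1d.
        x = self.point_mlp(points.transpose(1, 2))
        # Max pooling over the point dimension is a symmetric aggregation,
        # so permuting the input points leaves the global feature unchanged.
        global_feat = torch.max(x, dim=2).values
        return self.head(global_feat)


if __name__ == "__main__":
    model = TinyPointNet().eval()
    cloud = torch.rand(2, 1024, 3)           # two clouds of 1024 points each
    perm = torch.randperm(1024)              # random reordering of the points
    out1 = model(cloud)
    out2 = model(cloud[:, perm, :])
    print(torch.allclose(out1, out2, atol=1e-5))  # True: order-invariant
```

Running the snippet confirms that shuffling the points of a cloud produces the same class scores, which is the behavior the abstract refers to as respecting permutation invariance.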

Related Material


[bibtex]
@InProceedings{Qi_2017_CVPR,
    author    = {Qi, Charles R. and Su, Hao and Mo, Kaichun and Guibas, Leonidas J.},
    title     = {PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation},
    booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {July},
    year      = {2017}
}