Unsupervised Learning of Geometric Sampling Invariant Representations for 3D Point Clouds
Point clouds consist of discrete sets of points irregularly sampled from continuous 3D objects. Most existing approaches to point cloud learning follow (semi-)supervised paradigms, which require costly human annotations. To address this, we propose a novel unsupervised method for learning geometric sampling invariant representations, aiming to capture intrinsic features of point clouds based on the insight that the geometry of an object can be sampled in various patterns and densities, yielding different point clouds of the same underlying shape. In particular, we exploit invariant representations at two levels: low-resolution invariance and original-resolution invariance. To learn invariance at a lower resolution, we subsample the input point cloud in distinct patterns and maximize the mutual information among the subsampled variants. Further, to learn invariance at the original resolution, we upsample the subsampled point clouds back to the resolution of the input based on the learned features, and minimize the distance between the input and each upsampled version. In experiments, we apply the learned representations to representative downstream tasks; results on point cloud classification, segmentation and upsampling demonstrate the superiority of the proposed model.
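The two branches described above can be sketched in a minimal NumPy example. This is an illustrative assumption, not the paper's implementation: random subsampling stands in for the unspecified sampling patterns, and a symmetric Chamfer distance stands in for the unspecified distance between the input and its upsampled versions; the mutual-information objective and the learned upsampler are omitted.

```python
import numpy as np

def subsample(points, ratio, rng):
    """Randomly subsample a point cloud (one of many possible sampling
    patterns; the paper's exact subsampling strategy is not specified here)."""
    n = points.shape[0]
    idx = rng.choice(n, size=int(n * ratio), replace=False)
    return points[idx]

def chamfer_distance(a, b):
    """Symmetric Chamfer distance, an assumed instantiation of the
    'distance' used to compare the input with an upsampled variant."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise dists
    return d.min(axis=1).mean() + d.min(axis=0).mean()

rng = np.random.default_rng(0)
cloud = rng.standard_normal((256, 3))  # toy input point cloud

# Low-resolution branch: two subsampled variants in distinct patterns,
# whose features would be pushed together via mutual information.
v1 = subsample(cloud, 0.5, rng)
v2 = subsample(cloud, 0.5, rng)

# Original-resolution branch: a learned upsampler would map v1 back to
# 256 points; here we only show the loss call on identical clouds.
loss = chamfer_distance(cloud, cloud)  # identical clouds give distance 0
```

In a full model, the subsampled variants would be encoded by a shared network, the mutual-information loss applied to their features, and the Chamfer term applied between the input and each upsampled output.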