Spatio-Temporal Self-Supervised Representation Learning for 3D Point Clouds

Siyuan Huang, Yichen Xie, Song-Chun Zhu, Yixin Zhu; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 6535-6545

Abstract


To date, various 3D scene understanding tasks still lack practical and generalizable pre-trained models, primarily due to the intricate nature of 3D scene understanding tasks and their immense variations introduced by camera views, lighting, occlusions, etc. In this paper, we tackle this inherent challenge by introducing a spatio-temporal representation learning (STRL) framework, capable of learning from unlabeled 3D point clouds in a self-supervised fashion. Inspired by how infants learn from visual data in the wild, we explore the rich spatio-temporal cues derived from the 3D data. Specifically, STRL takes two temporally correlated frames from a 3D point cloud sequence as inputs, transforms them with spatial data augmentation, and learns the invariant representation self-supervisedly. To corroborate the efficacy of STRL, we conduct extensive experiments on synthetic, indoor, and outdoor datasets. Experimental results demonstrate that, compared with supervised learning methods, the learned self-supervised representation enables various models to attain comparable or even better performance, while the pre-trained models generalize to downstream tasks, including 3D shape classification, 3D object detection, and 3D semantic segmentation. Moreover, the spatio-temporal contextual cues embedded in 3D point clouds significantly improve the learned representations.
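The core idea in the abstract — embed two temporally correlated, independently augmented frames of a point cloud sequence and pull their representations together — can be illustrated with a minimal sketch. Note this is not the paper's implementation: the encoder below is a hypothetical stand-in (a real system would use a point-cloud backbone such as PointNet and a BYOL-style online/target pair), and all names and shapes here are illustrative assumptions.

```python
import math
import random

random.seed(0)

def rotate_z(points, theta):
    """Spatial augmentation: rotate a point cloud about the z-axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y, z) for x, y, z in points]

def encode(points):
    """Hypothetical stand-in encoder: mean-pool a per-point nonlinearity
    into a 3-D global feature. A real model would use a learned backbone."""
    n = len(points)
    return [sum(math.tanh(p[i]) for p in points) / n for i in range(3)]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Two temporally adjacent frames of a synthetic sequence:
# frame_t1 is frame_t perturbed by small per-point motion.
frame_t = [(random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1))
           for _ in range(1024)]
frame_t1 = [(x + random.gauss(0, 0.01), y + random.gauss(0, 0.01),
             z + random.gauss(0, 0.01)) for x, y, z in frame_t]

# Independently augment each frame, then embed both.
z1 = encode(rotate_z(frame_t, random.uniform(0, 2 * math.pi)))
z2 = encode(rotate_z(frame_t1, random.uniform(0, 2 * math.pi)))

# Self-supervised objective: minimized when the representation is
# invariant to the temporal change and the spatial augmentation.
loss = 1.0 - cosine_similarity(z1, z2)
```

Minimizing such a loss over many frame pairs (with gradient descent on real encoder weights) is what drives the representation to discard view- and motion-specific details while keeping the shared scene content.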

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Huang_2021_ICCV,
    author    = {Huang, Siyuan and Xie, Yichen and Zhu, Song-Chun and Zhu, Yixin},
    title     = {Spatio-Temporal Self-Supervised Representation Learning for 3D Point Clouds},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {6535-6545}
}