Spatial-Temporal Transformer for 3D Point Cloud Sequences

Yimin Wei, Hao Liu, Tingting Xie, Qiuhong Ke, Yulan Guo; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2022, pp. 1171-1180

Abstract


Effective learning of spatial-temporal information within a point cloud sequence is highly important for many downstream tasks such as 4D semantic segmentation and 3D action recognition. In this paper, we propose a novel framework named Point Spatial-Temporal Transformer (PST2) to learn spatial-temporal representations from dynamic 3D point cloud sequences. Our PST2 consists of two major modules: a Spatio-Temporal Self-Attention (STSA) module and a Resolution Embedding (RE) module. The STSA module is introduced to capture spatial-temporal context information across adjacent frames, while the RE module is proposed to aggregate features across neighbors to enhance the resolution of feature maps. We test the effectiveness of our PST2 on two different tasks on point cloud sequences, i.e., 4D semantic segmentation and 3D action recognition. Extensive experiments on three benchmarks show that our PST2 outperforms existing methods on all datasets. The effectiveness of our STSA and RE modules has also been validated with ablation experiments.
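To make the core idea concrete, the sketch below illustrates one plausible reading of spatio-temporal self-attention across adjacent frames: each frame's point tokens attend jointly over their own frame and the previous frame, so temporal context is mixed into the spatial features. This is a minimal NumPy illustration under assumed shapes and single-head attention, not the authors' PST2 implementation; all function and variable names here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    # Scaled dot-product self-attention over a set of tokens (M, C).
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    return softmax(scores) @ v

def spatio_temporal_attention(frames, Wq, Wk, Wv):
    # frames: (T, N, C) point features for T frames of N points each.
    # For each frame, attend jointly over its own tokens and those of
    # the previous frame, so context flows across adjacent frames
    # (an assumed simplification of the STSA idea).
    T, N, C = frames.shape
    out = np.empty_like(frames)
    for t in range(T):
        prev = frames[max(t - 1, 0)]
        joint = np.concatenate([prev, frames[t]], axis=0)  # (2N, C)
        out[t] = self_attention(joint, Wq, Wk, Wv)[N:]     # keep current-frame tokens
    return out

rng = np.random.default_rng(0)
T, N, C = 4, 8, 16
frames = rng.normal(size=(T, N, C))
Wq, Wk, Wv = (rng.normal(size=(C, C)) * 0.1 for _ in range(3))
print(spatio_temporal_attention(frames, Wq, Wk, Wv).shape)  # (4, 8, 16)
```

The output keeps the per-frame token count and channel width, so such a block can be stacked inside a standard point-feature hierarchy.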

Related Material


[bibtex]
@InProceedings{Wei_2022_WACV,
  author    = {Wei, Yimin and Liu, Hao and Xie, Tingting and Ke, Qiuhong and Guo, Yulan},
  title     = {Spatial-Temporal Transformer for 3D Point Cloud Sequences},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2022},
  pages     = {1171-1180}
}