PointFormer: A Dual Perception Attention-based Network for Point Cloud Classification

Yijun Chen, Zhulun Yang, Xianwei Zheng, Yadong Chang, Xutao Li; Proceedings of the Asian Conference on Computer Vision (ACCV), 2022, pp. 3291-3307

Abstract


Point cloud classification is a fundamental but still challenging task in 3-D computer vision. The main difficulty is that existing models struggle to learn representative features directly from raw point cloud objects. Inspired by the Transformer, which has been highly successful in natural language processing, we propose a purely attention-based network, named PointFormer, for point cloud classification. Specifically, we design a novel and simple point multiplicative attention mechanism. Based on it, we construct a local attention block and a global attention block to learn fine geometric features and overall representational features of the point cloud, respectively. Consequently, compared with existing approaches, PointFormer perceives both the local details and the overall contours of point cloud objects more effectively. In addition, we propose the Graph-Multiscale Perceptual Field (GMPF) testing strategy, which significantly improves the overall performance of PointFormer. We have conducted extensive experiments on the real-world dataset ScanObjectNN and the synthetic dataset ModelNet40. The results show that PointFormer is more robust and achieves highly competitive performance compared with other state-of-the-art approaches.
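For orientation only, below is a minimal sketch of what a "point multiplicative attention" block over per-point features could look like, assuming a standard scaled dot-product (multiplicative) formulation in PyTorch. The class name, layer choices, and residual connection are illustrative assumptions; the abstract does not specify PointFormer's exact local/global attention design.

# Hypothetical sketch of multiplicative (dot-product) attention over point features.
# This is NOT the authors' exact PointFormer block; it only illustrates the general idea.
import torch
import torch.nn as nn

class PointMultiplicativeAttention(nn.Module):
    """Self-attention over N point features of dimension C (illustrative only)."""

    def __init__(self, channels: int):
        super().__init__()
        self.to_q = nn.Linear(channels, channels, bias=False)
        self.to_k = nn.Linear(channels, channels, bias=False)
        self.to_v = nn.Linear(channels, channels, bias=False)
        self.scale = channels ** -0.5  # standard scaling for dot-product attention

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, C) per-point features
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)  # (B, N, N)
        return x + attn @ v  # residual connection preserves per-point identity

if __name__ == "__main__":
    feats = torch.randn(2, 1024, 64)              # 2 clouds, 1024 points, 64-dim features
    out = PointMultiplicativeAttention(64)(feats)
    print(out.shape)                              # torch.Size([2, 1024, 64])

A global attention block would apply such attention over all points of an object, while a local block would restrict it to neighborhoods (e.g. k-nearest neighbors); the paper itself should be consulted for the actual formulation and the GMPF testing strategy.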

Related Material


[bibtex]
@InProceedings{Chen_2022_ACCV,
    author    = {Chen, Yijun and Yang, Zhulun and Zheng, Xianwei and Chang, Yadong and Li, Xutao},
    title     = {PointFormer: A Dual Perception Attention-based Network for Point Cloud Classification},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2022},
    pages     = {3291-3307}
}