PU-Transformer: Point Cloud Upsampling Transformer

Shi Qiu, Saeed Anwar, Nick Barnes; Proceedings of the Asian Conference on Computer Vision (ACCV), 2022, pp. 2475-2493

Abstract


Given the rapid development of 3D scanners, point clouds are becoming popular in AI-driven machines. However, point cloud data is inherently sparse and irregular, causing major difficulties for machine perception. In this work, we focus on the point cloud upsampling task, which aims to generate dense, high-fidelity point clouds from sparse input data. Specifically, to exploit the transformer's strong capability for feature representation, we develop a new variant of the multi-head self-attention structure that enhances both point-wise and channel-wise relations of the feature map. In addition, we leverage a positional fusion block to comprehensively capture the local context of point cloud data, providing more position-related information about the scattered points. As ours is the first transformer model introduced for point cloud upsampling, we demonstrate its outstanding performance by comparing it with state-of-the-art CNN-based methods on different benchmarks, both quantitatively and qualitatively.
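To make the abstract's description more concrete, below is a minimal PyTorch sketch of the two ideas it names: a multi-head self-attention variant that models both point-wise and channel-wise relations, and a positional fusion step that injects local geometric context. The module names (PointChannelAttention, PositionalFusion), the channel-gating branch, the kNN fusion strategy, and all hyperparameters are illustrative assumptions, not the paper's actual PU-Transformer architecture.

```python
# A hypothetical sketch of attention that mixes point-wise and channel-wise
# relations, plus kNN-based positional fusion, in the spirit of the abstract.
# Shapes, head counts, and the fusion strategy are assumptions, not the
# paper's exact PU-Transformer design.
import torch
import torch.nn as nn


class PointChannelAttention(nn.Module):
    """Self-attention over points, plus a channel-wise gating branch."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # Standard multi-head self-attention across the N points.
        self.point_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # A lightweight channel-relation branch (squeeze-and-excitation style).
        self.channel_gate = nn.Sequential(
            nn.Linear(dim, dim // 2),
            nn.ReLU(inplace=True),
            nn.Linear(dim // 2, dim),
            nn.Sigmoid(),
        )
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, C) features for N points with C channels.
        attn_out, _ = self.point_attn(x, x, x)       # point-wise relations
        gate = self.channel_gate(x.mean(dim=1))      # (B, C) channel weights
        out = attn_out * gate.unsqueeze(1)           # channel-wise re-weighting
        return self.norm(x + out)                    # residual + norm


class PositionalFusion(nn.Module):
    """Fuse per-point features with local geometric context from kNN."""

    def __init__(self, dim: int, k: int = 16):
        super().__init__()
        self.k = k
        # 3 relative-coordinate channels concatenated with neighbor features.
        self.mlp = nn.Sequential(nn.Linear(dim + 3, dim), nn.ReLU(inplace=True))

    def forward(self, xyz: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # xyz: (B, N, 3) coordinates; feats: (B, N, C) features.
        dists = torch.cdist(xyz, xyz)                       # (B, N, N)
        idx = dists.topk(self.k, largest=False).indices     # (B, N, k) neighbors
        B = feats.shape[0]
        batch = torch.arange(B, device=feats.device).view(B, 1, 1)
        nbr_feats = feats[batch, idx]                       # (B, N, k, C)
        nbr_xyz = xyz[batch, idx] - xyz.unsqueeze(2)        # relative positions
        fused = self.mlp(torch.cat([nbr_feats, nbr_xyz], dim=-1))
        return fused.max(dim=2).values                      # (B, N, C) local context


if __name__ == "__main__":
    xyz = torch.randn(2, 256, 3)      # 2 clouds, 256 points, 3D coordinates
    feats = torch.randn(2, 256, 64)   # 64-channel features per point
    attn = PointChannelAttention(dim=64)
    fuse = PositionalFusion(dim=64, k=16)
    out = attn(feats + fuse(xyz, feats))   # fuse local context, then attend
    print(out.shape)                       # torch.Size([2, 256, 64])
```

In the paper itself these components sit inside a deeper upsampling pipeline; this sketch only isolates the feature-relation and positional-context ideas the abstract describes.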

Related Material


[bibtex]
@InProceedings{Qiu_2022_ACCV,
    author    = {Qiu, Shi and Anwar, Saeed and Barnes, Nick},
    title     = {PU-Transformer: Point Cloud Upsampling Transformer},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2022},
    pages     = {2475-2493}
}