Dynamic Cues-Assisted Transformer for Robust Point Cloud Registration

Hong Chen, Pei Yan, Sihe Xiang, Yihua Tan; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 21698-21707

Abstract


Point cloud registration is a critical and challenging task in computer vision. Recent advances have predominantly embraced a coarse-to-fine matching mechanism, whose key step is matching superpoints located in patches with inter-frame consistent structures. However, previous methods still suffer from ambiguous matching, because interference aggregated from irrelevant regions may disturb the capture of inter-frame consistency relations and lead to wrong matches. To address this issue, we propose the Dynamic Cues-Assisted Transformer (DCATr). First, interference from irrelevant regions is greatly reduced by constraining attention to certain cues, i.e., regions whose structures are highly correlated with those of potential corresponding superpoints. Second, cues-assisted attention is designed to mine inter-frame consistency relations, assigning more attention to pairs with high consistency confidence during feature aggregation. Finally, a dynamic updating scheme is proposed to mine richer consistency information, further improving the distinctiveness of the aggregated features and relieving matching ambiguity. Extensive evaluations on standard indoor and outdoor benchmarks demonstrate that DCATr outperforms all state-of-the-art methods.
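The core idea, attention restricted to a small set of highly correlated cue regions that is re-selected after each feature update, can be illustrated with a minimal sketch. This is not the paper's implementation: the feature dimensions, the top-k cue selection, the dot-product correlation, and the residual update are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax; -inf entries get zero weight."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cues_assisted_attention(src, tgt, k=4, iters=2):
    """Illustrative sketch: each source superpoint attends only to its
    top-k most correlated target superpoints (its 'cues'), so feature
    aggregation ignores irrelevant regions. Re-selecting the cues after
    every update loosely mimics the paper's dynamic updating fashion.

    src: (N, d) source superpoint features; tgt: (M, d) target features.
    Returns the updated (N, d) source features.
    """
    for _ in range(iters):
        # pairwise correlation scores (scaled dot product, an assumption)
        scores = src @ tgt.T / np.sqrt(src.shape[1])          # (N, M)
        # keep only the top-k cues per row; mask out everything else
        cue_idx = np.argsort(-scores, axis=1)[:, :k]
        masked = np.full_like(scores, -np.inf)
        np.put_along_axis(masked, cue_idx,
                          np.take_along_axis(scores, cue_idx, axis=1), axis=1)
        # attention weights concentrate on cues with high correlation
        attn = softmax(masked, axis=1)                        # (N, M)
        # aggregate cue features into the source features (residual update)
        src = src + attn @ tgt
    return src

rng = np.random.default_rng(0)
src = rng.standard_normal((5, 8))
tgt = rng.standard_normal((6, 8))
out = cues_assisted_attention(src, tgt, k=3)
print(out.shape)  # (5, 8)
```

Because non-cue entries are masked to negative infinity before the softmax, each row of the attention matrix has exactly k nonzero weights, which is what keeps interference from irrelevant regions out of the aggregated features.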

Related Material


[pdf]
[bibtex]
@InProceedings{Chen_2024_CVPR,
  author    = {Chen, Hong and Yan, Pei and Xiang, Sihe and Tan, Yihua},
  title     = {Dynamic Cues-Assisted Transformer for Robust Point Cloud Registration},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
  pages     = {21698-21707}
}