[pdf] [supp]
[bibtex]
@InProceedings{Peng_2024_CVPR,
  author    = {Peng, Guohao and Li, Heshan and Zhao, Yangyang and Zhang, Jun and Wu, Zhenyu and Zheng, Pengyu and Wang, Danwei},
  title     = {TransLoc4D: Transformer-based 4D Radar Place Recognition},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
  pages     = {17595-17605}
}
TransLoc4D: Transformer-based 4D Radar Place Recognition
Abstract
Place recognition is crucial for unmanned vehicles in terms of localization and mapping. Recent years have witnessed numerous explorations in the field, where 2D cameras and 3D LiDARs are mostly employed. Despite their admirable performance, they may encounter challenges in adverse weather such as rain and fog. Promisingly, 4D millimeter-wave Radar emerges as an alternative, as its longer wavelength makes it virtually immune to interference from the tiny particles of fog and rain. Therefore, in this work we propose TransLoc4D, a novel 4D Radar place recognition model based on sparse convolution and Transformer structures. Specifically, a MinkLoc4D backbone is first proposed to leverage the geometric, intensity, and velocity information from 4D Radar scans. While mainstream 3D LiDAR solutions merely capture the geometric structure of point clouds, MinkLoc4D additionally exploits the intensity and velocity properties of 4D Radar scans and demonstrates their effectiveness. After feature extraction, a Transformer layer is introduced to enhance local features, where linear self-attention captures the long-range dependencies of the point cloud, alleviating its sparsity and noise. To validate TransLoc4D, we construct two datasets and set up benchmarks for 4D Radar place recognition. Experiments show that TransLoc4D is feasible and can robustly handle dynamic and adverse environments.
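The abstract's key efficiency claim is that linear self-attention captures long-range dependencies across the point features at O(N) cost instead of the O(N^2) of standard softmax attention. A minimal NumPy sketch of that idea follows; the ELU+1 kernel feature map and the projection names are assumptions for illustration (following the common linear-transformer formulation), not the paper's exact layer:

```python
import numpy as np

def linear_self_attention(x, Wq, Wk, Wv):
    """Linear self-attention over N point features.

    x: (N, d) local features; Wq, Wk, Wv: (d, d) projection matrices.
    Aggregating keys and values first gives O(N*d^2) cost rather than
    the O(N^2*d) of forming a full N x N attention matrix.
    """
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    # ELU(t) + 1 feature map keeps scores positive so the normalizer
    # is well defined (an assumed choice; TransLoc4D may differ).
    phi = lambda t: np.where(t > 0, t + 1.0, np.exp(t))
    Qp, Kp = phi(Q), phi(K)
    KV = Kp.T @ V                                  # (d, d) summary of keys/values
    Z = Qp @ Kp.sum(axis=0, keepdims=True).T       # (N, 1) per-query normalizer
    return (Qp @ KV) / (Z + 1e-6)                  # (N, d) attended features
```

Because the N x N attention matrix is never materialized, the same code scales to the dense-but-noisy point sets a 4D radar scan produces.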