Depth Estimation via Sparse Radar Prior and Driving Scene Semantics

Ke Zheng, Shuguang Li, Kongjian Qin, Zhenxu LI, Yang Zhao, Zhinan Peng, Hong Cheng; Proceedings of the Asian Conference on Computer Vision (ACCV), 2022, pp. 911-927

Abstract


Depth estimation is an essential module in the perception system of autonomous driving. State-of-the-art methods introduce LiDAR to improve the performance of monocular depth estimation, but LiDAR suffers from limited weather durability and high hardware cost. Unlike existing LiDAR-and-image-based methods, this paper proposes a two-stage network that integrates highly sparse radar data, in which a sparse pre-mapping module performs radar feature extraction and a feature fusion module performs feature fusion. Considering the highly structured nature of driving scenes, we incorporate the semantic information of the scene into the loss function, making the network focus more on target regions. Finally, we propose a novel depth dataset construction strategy for the nuScenes dataset that integrates binary mask-based filtering and interpolation. Extensive experiments demonstrate the effectiveness of the proposed method, which outperforms existing methods on all metrics.
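The abstract does not give the exact form of the semantically weighted loss. As an illustration of the idea of weighting depth error by scene semantics so that target regions count more, here is a minimal sketch; the function name, the per-class weight table, and the L1 error form are all assumptions for illustration, not the authors' formulation:

```python
import numpy as np

def semantic_weighted_l1(pred, gt, sem_labels, class_weights, valid_mask):
    """Sketch of a semantics-weighted L1 depth loss (illustrative only).

    pred, gt      : (H, W) predicted and ground-truth depth maps
    sem_labels    : (H, W) integer semantic class per pixel
    class_weights : (C,) weight per semantic class (assumed; e.g. higher
                    weight for vehicles/pedestrians than road/sky)
    valid_mask    : (H, W) 1 where ground-truth depth is valid, else 0
    """
    w = class_weights[sem_labels]            # per-pixel weight from semantics
    err = np.abs(pred - gt) * w * valid_mask # weighted L1 error on valid pixels
    denom = np.maximum((w * valid_mask).sum(), 1e-6)
    return err.sum() / denom                 # weighted mean over valid pixels
```

With uniform class weights this reduces to an ordinary masked L1 loss; raising the weight of a class steers gradient magnitude toward pixels of that class.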

Related Material


@InProceedings{Zheng_2022_ACCV,
    author    = {Zheng, Ke and Li, Shuguang and Qin, Kongjian and LI, Zhenxu and Zhao, Yang and Peng, Zhinan and Cheng, Hong},
    title     = {Depth Estimation via Sparse Radar Prior and Driving Scene Semantics},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2022},
    pages     = {911-927}
}