S3: Learnable Sparse Signal Superdensity for Guided Depth Estimation

Yu-Kai Huang, Yueh-Cheng Liu, Tsung-Han Wu, Hung-Ting Su, Yu-Cheng Chang, Tsung-Lin Tsou, Yu-An Wang, Winston H. Hsu; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 16706-16716

Abstract


Dense depth estimation plays a key role in multiple applications such as robotics, 3D reconstruction, and augmented reality. While sparse signals, e.g., from LiDAR and Radar, have been leveraged as guidance for enhancing dense depth estimation, the improvement is limited by their low density and imbalanced distribution. To maximize the utility of the sparse source, we propose the Sparse Signal Superdensity (S3) technique, which expands the depth values from sparse cues while estimating the confidence of the expanded regions. The proposed S3 can be applied to various guided depth estimation approaches and trained end-to-end at different stages, including input, cost volume, and output. Extensive experiments demonstrate the effectiveness, robustness, and flexibility of the S3 technique on LiDAR and Radar signals.
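To make the core idea concrete, below is a minimal PyTorch-style sketch of the expand-and-weigh scheme the abstract describes, not the authors' actual implementation. All names (SparseSignalSuperdensity, confidence_net) and design choices (window-based propagation via max pooling, a small CNN for confidence) are illustrative assumptions: each sparse depth point is propagated to a local neighborhood, and a learned network predicts a per-pixel confidence for the expanded values.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseSignalSuperdensity(nn.Module):
    """Illustrative sketch (hypothetical module, not the paper's code):
    expand sparse depth to nearby pixels and predict a confidence map."""

    def __init__(self, kernel_size=9):
        super().__init__()
        self.kernel_size = kernel_size
        # Small CNN predicting confidence from the RGB image plus expanded depth.
        self.confidence_net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, image, sparse_depth):
        # image: (B, 3, H, W); sparse_depth: (B, 1, H, W), zero where unmeasured.
        valid = (sparse_depth > 0).float()
        pad = self.kernel_size // 2
        # Crude expansion: propagate each point's depth into a local window.
        expanded = F.max_pool2d(sparse_depth, self.kernel_size, stride=1, padding=pad)
        coverage = F.max_pool2d(valid, self.kernel_size, stride=1, padding=pad)
        expanded = expanded * coverage  # keep zeros outside every window
        # Learned confidence in [0, 1] for each expanded pixel.
        confidence = self.confidence_net(torch.cat([image, expanded], dim=1))
        return expanded, confidence * coverage

The resulting (expanded depth, confidence) pair can then be consumed at any of the stages the paper names: concatenated to the input, used to weight a cost volume, or fused with the network's output; since the module is differentiable, it can be trained end-to-end with the downstream depth estimator.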

Related Material


BibTeX
@InProceedings{Huang_2021_CVPR,
  author    = {Huang, Yu-Kai and Liu, Yueh-Cheng and Wu, Tsung-Han and Su, Hung-Ting and Chang, Yu-Cheng and Tsou, Tsung-Lin and Wang, Yu-An and Hsu, Winston H.},
  title     = {S3: Learnable Sparse Signal Superdensity for Guided Depth Estimation},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2021},
  pages     = {16706-16716}
}