Boosting Monocular Depth Estimation With Lightweight 3D Point Fusion

Lam Huynh, Phong Nguyen, Jiří Matas, Esa Rahtu, Janne Heikkilä; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 12767-12776

Abstract


In this paper, we propose enhancing monocular depth estimation by adding 3D points as depth guidance. Unlike existing depth completion methods, our approach performs well on extremely sparse and unevenly distributed point clouds, which makes it agnostic to the source of the 3D points. We achieve this by introducing a novel multi-scale 3D point fusion network that is both lightweight and efficient. We demonstrate its versatility on two different depth estimation problems where the 3D points have been acquired with conventional structure-from-motion and LiDAR. In both cases, our network performs on par with state-of-the-art depth completion methods and achieves significantly higher accuracy when only a small number of points is used, while being more compact in terms of the number of parameters. We show that our method outperforms some contemporary deep learning-based multi-view stereo and structure-from-motion methods in both accuracy and compactness.
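The abstract contains no code, but the core idea it describes can be illustrated with a short, hypothetical sketch: project the available 3D points into a sparse depth map and inject that map into an image network at multiple scales. The PyTorch code below is an assumption-laden illustration of this general scheme only, not the authors' architecture; the helper points_to_sparse_depth, every module name, and every layer size are invented purely for the example.

# Minimal PyTorch sketch of the idea described in the abstract: sparse 3D
# points are projected into a sparse depth map and fused with RGB features
# at multiple scales. This is NOT the paper's network; all modules, channel
# counts, and names below are assumptions made for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


def points_to_sparse_depth(points_cam, K, height, width):
    """Project 3D points (N, 3) in camera coordinates into a sparse depth map.

    points_cam and the intrinsics K (3, 3) are hypothetical inputs; points
    behind the camera or outside the image are simply dropped.
    """
    depth = torch.zeros(1, 1, height, width)
    z = points_cam[:, 2]
    valid = z > 1e-3
    uvw = (K @ points_cam[valid].T).T            # (M, 3) homogeneous pixels
    u = (uvw[:, 0] / uvw[:, 2]).round().long()
    v = (uvw[:, 1] / uvw[:, 2]).round().long()
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    depth[0, 0, v[inside], u[inside]] = z[valid][inside]
    return depth                                  # zeros mark missing depth


class FusionBlock(nn.Module):
    """Fuse image features with a (resized) sparse depth map at one scale."""

    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels + 1, channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, feat, sparse_depth):
        d = F.interpolate(sparse_depth, size=feat.shape[-2:], mode="nearest")
        return self.conv(torch.cat([feat, d], dim=1))


class TinyPointFusionNet(nn.Module):
    """Toy encoder-decoder that injects the sparse depth at several scales."""

    def __init__(self, base=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, base, 3, 2, 1), nn.ReLU(True))
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, 2, 1), nn.ReLU(True))
        self.fuse1 = FusionBlock(base)
        self.fuse2 = FusionBlock(base * 2)
        self.dec = nn.Sequential(
            nn.Conv2d(base * 2, base, 3, padding=1), nn.ReLU(True),
            nn.Conv2d(base, 1, 3, padding=1),
        )

    def forward(self, rgb, sparse_depth):
        f1 = self.fuse1(self.enc1(rgb), sparse_depth)
        f2 = self.fuse2(self.enc2(f1), sparse_depth)
        out = self.dec(f2)
        return F.interpolate(out, size=rgb.shape[-2:], mode="bilinear",
                             align_corners=False)


if __name__ == "__main__":
    rgb = torch.rand(1, 3, 240, 320)
    pts = torch.rand(50, 3) * torch.tensor([2.0, 2.0, 5.0])   # 50 sparse points
    K = torch.tensor([[300.0, 0.0, 160.0],
                      [0.0, 300.0, 120.0],
                      [0.0, 0.0, 1.0]])
    sparse = points_to_sparse_depth(pts, K, 240, 320)
    pred = TinyPointFusionNet()(rgb, sparse)
    print(pred.shape)   # torch.Size([1, 1, 240, 320])

Representing missing measurements as zeros and letting the network fall back on the RGB stream wherever no point projects is one simple way such a fusion can remain agnostic to point density; the fusion modules in the actual paper are presumably considerably more sophisticated than this toy example.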

Related Material


[pdf] [supp] [arXiv]
@InProceedings{Huynh_2021_ICCV,
    author    = {Huynh, Lam and Nguyen, Phong and Matas, Ji\v{r}{\'\i} and Rahtu, Esa and Heikkil\"a, Janne},
    title     = {Boosting Monocular Depth Estimation With Lightweight 3D Point Fusion},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {12767-12776}
}