DeepDNet: Deep Dense Network for Depth Completion Task

Girish Hegde, Tushar Pharale, Soumya Jahagirdar, Vaishakh Nargund, Ramesh Ashok Tabib, Uma Mudenagudi, Basavaraja Vandrotti, Ankit Dhiman; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2021, pp. 2190-2199

Abstract


In this paper, we propose a Deep Dense Network for Depth Completion Task (DeepDNet) towards generating a dense depth map from sparse depth and the captured view. A wide variety of scene-understanding applications, such as 3D reconstruction, mixed reality, and robotics, demands accurate and dense depth maps. Existing depth sensors capture accurate and reliable sparse depth but find it challenging to acquire dense depth maps. Towards this, we utilise the accurate sparse depth as input, together with the RGB image, to generate dense depth. We model the transformation of the random sparse input to a grid-based sparse input using Quad-tree decomposition. We propose a Dense-Residual-Skip (DRS) Autoencoder, along with attention towards edge preservation using a Gradient Aware Mean Squared Error (GAMSE) Loss. We demonstrate our results on the NYUv2 dataset and compare them with other state-of-the-art methods. We also show our results on sparse depth captured by the ARCore Depth API, compared against its dense depth map. Extensive experiments suggest consistent improvements over existing methods.
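The abstract does not spell out the GAMSE formulation. As a rough illustration only, the PyTorch sketch below shows one common way a gradient-aware MSE can be built: plain squared error re-weighted by the gradient magnitude of the ground-truth depth, so errors at depth edges are penalised more. The function name `gamse_loss`, the Sobel-based edge map, and the `edge_weight` knob are all assumptions for illustration, not the authors' definition.

```python
import torch
import torch.nn.functional as F

def gamse_loss(pred, target, edge_weight=1.0):
    """Sketch of a gradient-aware MSE (assumed form, not the paper's code).

    pred, target: (B, 1, H, W) depth maps.
    edge_weight: hypothetical knob balancing the edge-weighted term.
    """
    # Sobel kernels for horizontal / vertical gradients of the target depth.
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]], device=target.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)  # Sobel-y is the transpose of Sobel-x

    gx = F.conv2d(target, kx, padding=1)
    gy = F.conv2d(target, ky, padding=1)
    grad_mag = torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)  # edge-strength map

    se = (pred - target) ** 2
    # Up-weight the squared error near depth discontinuities.
    return (se * (1.0 + edge_weight * grad_mag)).mean()
```

Weighting an L2 loss by an edge map like this is one standard route to edge preservation in depth completion; the exact GAMSE definition used in the paper may differ.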

Related Material


[pdf]
[bibtex]
@InProceedings{Hegde_2021_CVPR,
    author    = {Hegde, Girish and Pharale, Tushar and Jahagirdar, Soumya and Nargund, Vaishakh and Tabib, Ramesh Ashok and Mudenagudi, Uma and Vandrotti, Basavaraja and Dhiman, Ankit},
    title     = {DeepDNet: Deep Dense Network for Depth Completion Task},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2021},
    pages     = {2190-2199}
}