Depth Completion Auto-Encoder

Kaiyue Lu, Nick Barnes, Saeed Anwar, Liang Zheng; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Workshops, 2022, pp. 63-73


This paper proposes a new way of using RGB image features for unsupervised depth completion. Instead of taking the image as input, as existing works do, we employ the image to guide the learning process. Specifically, we regard dense depth as a reconstruction of the sparse input and formulate our model as an auto-encoder. To reduce the structure inconsistency that arises from sparse depth, we use the image to guide the latent features by penalizing their difference during training. This image-guidance loss enables our model to acquire denser and more structural cues that help it produce more accurate and consistent depth values. At inference, our model takes only sparse depth as input; no image is required. This paradigm is new and pushes unsupervised depth completion further than existing works, which require the image at test time. We validate its effectiveness through extensive experiments on the KITTI Depth Completion Benchmark and achieve promising performance compared with other unsupervised works. The proposed method is also applicable to indoor scenes such as NYUv2.
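The training objective described in the abstract combines a reconstruction term on the sparse depth with an image-guidance term that penalizes the difference between the depth encoder's latent features and image-derived features. A minimal NumPy sketch of that combined loss is below; all names (`guidance_weight`, `total_loss`, etc.) and the exact distance measures are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def guidance_loss(depth_latent, image_latent):
    """Mean squared difference between the depth encoder's latent
    features and image-derived latent features (illustrative choice)."""
    return np.mean((depth_latent - image_latent) ** 2)

def total_loss(pred_depth, sparse_depth, valid_mask,
               depth_latent, image_latent, guidance_weight=0.1):
    """Reconstruction loss on valid sparse-depth pixels plus a weighted
    image-guidance term. guidance_weight is a hypothetical hyperparameter."""
    # Penalize reconstruction error only where sparse depth is observed.
    recon = np.sum(valid_mask * (pred_depth - sparse_depth) ** 2) / np.sum(valid_mask)
    return recon + guidance_weight * guidance_loss(depth_latent, image_latent)
```

At test time only the depth branch would run, so `guidance_loss` drops out entirely, matching the abstract's claim that no image is needed for inference.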

Related Material

@InProceedings{Lu_2022_WACV,
    author    = {Lu, Kaiyue and Barnes, Nick and Anwar, Saeed and Zheng, Liang},
    title     = {Depth Completion Auto-Encoder},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Workshops},
    month     = {January},
    year      = {2022},
    pages     = {63-73}
}