Depth Coefficients for Depth Completion
Saif Imran, Yunfei Long, Xiaoming Liu, Daniel Morris; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 12446-12455
Abstract
Depth completion involves estimating a dense depth image from sparse depth measurements, often guided by a color image. While linear upsampling is straightforward, it interpolates depth pixels in empty space across discontinuities between objects. Current methods use deep networks to maintain gaps between objects. Nevertheless, depth smearing remains a challenge. We propose a new representation for depth called Depth Coefficients (DC) to address this problem. It enables convolutions to more easily avoid inter-object depth mixing. We also show that the standard Mean Squared Error (MSE) loss function can promote depth mixing, and so we propose instead to use a cross-entropy loss for DC. Both quantitative and qualitative evaluations are conducted on benchmarks, and we show that replacing the sparse depth input and MSE loss with our DC representation and loss is a simple way to improve performance, reduce pixel depth mixing, and improve object detection.
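To make the idea concrete, below is a minimal sketch (not the authors' code) of representing depth as per-pixel weights over discretized depth bins, recovering a depth value as the weighted sum of bin centers, and supervising the weights with a cross-entropy loss instead of MSE. The bin count, depth range, uniform bin spacing, and softmax output are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of a depth-as-bin-weights representation with cross-entropy supervision.
# All constants below (NUM_BINS, MAX_DEPTH) are assumed for illustration only.
import torch
import torch.nn.functional as F

NUM_BINS = 80        # assumed number of depth bins
MAX_DEPTH = 80.0     # assumed maximum depth in meters
BIN_CENTERS = torch.linspace(0.5, NUM_BINS - 0.5, NUM_BINS) * (MAX_DEPTH / NUM_BINS)


def depth_to_bin_index(depth):
    """Map each depth value to the index of its nearest depth bin."""
    return (depth / MAX_DEPTH * NUM_BINS).long().clamp(0, NUM_BINS - 1)


def coefficients_to_depth(logits):
    """Recover dense depth as the softmax-weighted sum of bin centers.

    logits: (B, NUM_BINS, H, W) per-pixel scores over depth bins.
    """
    weights = F.softmax(logits, dim=1)                        # (B, K, H, W)
    centers = BIN_CENTERS.view(1, -1, 1, 1).to(logits.device)
    return (weights * centers).sum(dim=1)                     # (B, H, W)


def depth_cross_entropy_loss(logits, gt_depth, valid_mask):
    """Cross-entropy between predicted bin scores and the ground-truth bin,
    averaged over pixels with valid (sparse) ground truth."""
    target = depth_to_bin_index(gt_depth)                     # (B, H, W)
    loss = F.cross_entropy(logits, target, reduction="none")  # (B, H, W)
    return (loss * valid_mask).sum() / valid_mask.sum().clamp(min=1)


if __name__ == "__main__":
    # Toy usage: random "network output" and a sparse ground-truth depth map.
    logits = torch.randn(2, NUM_BINS, 64, 64, requires_grad=True)
    gt = torch.rand(2, 64, 64) * MAX_DEPTH
    mask = (torch.rand(2, 64, 64) > 0.95).float()             # ~5% sparse labels
    loss = depth_cross_entropy_loss(logits, gt, mask)
    loss.backward()
    pred_depth = coefficients_to_depth(logits.detach())
    print(loss.item(), pred_depth.shape)
```

Because each pixel's prediction is a distribution over discrete depths rather than a single regressed value, averaging across an object boundary shifts probability mass between bins instead of producing an intermediate "smeared" depth, which is the intuition the abstract describes.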
Related Material
[pdf] [bibtex]
@InProceedings{Imran_2019_CVPR,
author = {Imran, Saif and Long, Yunfei and Liu, Xiaoming and Morris, Daniel},
title = {Depth Coefficients for Depth Completion},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}