Learning Depth From Monocular Videos Using Direct Methods

Chaoyang Wang, José Miguel Buenaposada, Rui Zhu, Simon Lucey; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 2022-2030

Abstract


The ability to predict depth from a single image, using recent advances in CNNs, is of increasing interest to the vision community. Unsupervised strategies for learning are particularly appealing, as they can utilize much larger and more varied monocular video datasets during training without the need for ground-truth depth or stereo. In previous works, separate pose and depth CNN predictors had to be determined such that their joint outputs minimized the photometric error. Inspired by recent advances in direct visual odometry (DVO), we argue that the depth CNN predictor can be learned without a pose CNN predictor. Further, we demonstrate empirically that incorporating a differentiable implementation of DVO, along with a novel depth normalization strategy, substantially improves performance over state-of-the-art methods that use monocular videos for training.
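
The training signal described in the abstract is a photometric reconstruction error between a reference frame and a neighboring frame warped into the reference view using the predicted depth and a camera motion estimated by differentiable DVO (rather than a pose CNN), with the predicted depth normalized so the loss cannot be reduced by shrinking the scene scale. The following is a minimal PyTorch-style sketch of these two ingredients; the function names, tensor layouts, and the per-image mean normalization are illustrative assumptions, not the paper's exact implementation.

import torch

def normalize_depth(depth):
    # Assumed form of the depth normalization: divide each predicted
    # depth map (shape [B, 1, H, W]) by its per-image mean, so the
    # photometric loss cannot be trivially lowered by uniformly
    # shrinking the predicted scene scale.
    return depth / depth.mean(dim=(1, 2, 3), keepdim=True)

def photometric_loss(ref_img, warped_src_img, valid_mask=None):
    # L1 photometric error between the reference frame and a source
    # frame warped into the reference view using the predicted depth
    # and the relative pose from (differentiable) direct visual odometry.
    diff = (ref_img - warped_src_img).abs()
    if valid_mask is not None:
        diff = diff * valid_mask
    return diff.mean()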

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Wang_2018_CVPR,
author = {Wang, Chaoyang and Buenaposada, José Miguel and Zhu, Rui and Lucey, Simon},
title = {Learning Depth From Monocular Videos Using Direct Methods},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}