Unsupervised Learning of Depth and Ego-Motion From Video

Tinghui Zhou, Matthew Brown, Noah Snavely, David G. Lowe; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 1851-1858

Abstract


We present an unsupervised learning framework for the task of dense 3D geometry and camera motion estimation from unstructured video sequences. In common with recent work, we use an end-to-end learning approach with view synthesis as the supervisory signal. In contrast to prior methods, however, ours is completely unsupervised, requiring only unlabeled image sequences as input. We achieve this with a network that estimates the 6-DoF camera poses of the input views, along with a dense depth map for a reference view obtained by single-view inference. The loss is constructed by projecting the nearby posed views into the reference view via the predicted depth map. Results on the KITTI dataset demonstrate the effectiveness of our approach, which performs on par with a recent deep learning approach that assumes ground-truth pose information at training time.
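
The supervisory signal described above, warping nearby posed views into the reference view through the predicted depth map, can be made concrete with a short sketch. The following is a minimal, illustrative PyTorch implementation under assumed tensor shapes; the function name view_synthesis_loss, the pose convention T_src_tgt (reference-to-source rigid transform), and the single-pair L1 loss are choices made here for clarity, not the authors' released code. Pixels of the reference view are back-projected with the predicted depth, transformed by the predicted relative pose, re-projected into a nearby source view, and the bilinearly sampled source image is compared photometrically against the reference image.

import torch
import torch.nn.functional as F

def view_synthesis_loss(tgt_img, src_img, tgt_depth, T_src_tgt, K):
    """Photometric loss from warping src_img into the reference (target) view.

    tgt_img, src_img: (B, 3, H, W) images
    tgt_depth:        (B, 1, H, W) predicted depth of the reference view
    T_src_tgt:        (B, 4, 4) relative pose mapping reference-frame points to the source frame
    K:                (B, 3, 3) camera intrinsics
    """
    B, _, H, W = tgt_img.shape
    device, dtype = tgt_img.device, tgt_img.dtype

    # Pixel grid of the reference view in homogeneous coordinates: (B, 3, H*W)
    ys, xs = torch.meshgrid(torch.arange(H, device=device, dtype=dtype),
                            torch.arange(W, device=device, dtype=dtype),
                            indexing="ij")
    ones = torch.ones_like(xs)
    pix = torch.stack([xs, ys, ones], dim=0).view(1, 3, -1).expand(B, -1, -1)

    # Back-project to 3D points in the reference camera frame: X = D * K^{-1} p
    cam_points = torch.linalg.inv(K) @ pix * tgt_depth.view(B, 1, -1)

    # Transform points into the source frame and project with the intrinsics
    cam_points_h = torch.cat(
        [cam_points, torch.ones(B, 1, H * W, device=device, dtype=dtype)], dim=1)
    src_points = (T_src_tgt @ cam_points_h)[:, :3]
    proj = K @ src_points
    z = proj[:, 2:3].clamp(min=1e-6)
    uv = proj[:, :2] / z  # (B, 2, H*W) pixel coordinates in the source image

    # Normalize to [-1, 1] for grid_sample and warp the source image into the reference view
    u = 2.0 * uv[:, 0] / (W - 1) - 1.0
    v = 2.0 * uv[:, 1] / (H - 1) - 1.0
    grid = torch.stack([u, v], dim=-1).view(B, H, W, 2)
    warped = F.grid_sample(src_img, grid, padding_mode="zeros", align_corners=True)

    # L1 photometric difference; out-of-view pixels could be masked in a fuller version
    return (warped - tgt_img).abs().mean()

In the full training objective described in the paper, such a photometric term is accumulated over all nearby source views and multiple image scales and combined with additional regularization (e.g., a smoothness term on the predicted depth); the sketch above shows only the core single-pair warping loss.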

Related Material


BibTeX:
@InProceedings{Zhou_2017_CVPR,
  author    = {Zhou, Tinghui and Brown, Matthew and Snavely, Noah and Lowe, David G.},
  title     = {Unsupervised Learning of Depth and Ego-Motion From Video},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {July},
  year      = {2017}
}