Look Deeper into Depth: Monocular Depth Estimation with Semantic Booster and Attention-Driven Loss

Jianbo Jiao, Ying Cao, Yibing Song, Rynson Lau; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 53-69

Abstract

Monocular depth estimation benefits greatly from learning-based techniques. By studying the training data, we observe that the per-pixel depth values in existing datasets typically exhibit a long-tailed distribution. However, most previous approaches treat all regions in the training data equally, regardless of this imbalanced depth distribution, which limits model performance particularly on distant depth regions. In this paper, we investigate the long-tail property and delve deeper into the distant depth regions (i.e., the tail part) to propose an attention-driven loss for network supervision. In addition, to better leverage semantic information for monocular depth estimation, we propose a synergy network that automatically learns information-sharing strategies between the two tasks. With the proposed attention-driven loss and synergy network, the depth estimation and semantic labeling tasks are mutually improved. Experiments on a challenging indoor dataset show that the proposed approach achieves state-of-the-art performance on both monocular depth estimation and semantic labeling.
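The long-tail observation suggests re-weighting the per-pixel regression loss so that the scarce distant-depth pixels contribute more to training. Below is a minimal PyTorch-style sketch of such an attention-driven loss; the linear depth-proportional weighting, the function name, and the max_depth/alpha parameters are illustrative assumptions, not the paper's exact formulation.

import torch

def attention_driven_depth_loss(pred, target, max_depth=10.0, alpha=0.5):
    # Per-pixel weight grows with ground-truth depth, so distant (tail)
    # regions are emphasized instead of being drowned out by the many
    # near-depth pixels. The linear schedule is an assumption for
    # illustration; the paper derives its attention weights differently.
    weight = alpha + torch.clamp(target / max_depth, 0.0, 1.0)
    return torch.mean(weight * torch.abs(pred - target))

The synergy between the depth and semantic branches can be pictured as learnable lateral connections between the two task streams. The sketch below uses a cross-stitch-style sharing unit as a stand-in for the paper's automatically learned sharing strategy; the class name and the initial mixing weights are assumptions.

import torch
import torch.nn as nn

class SharingUnit(nn.Module):
    # Cross-stitch-style unit: each task's features become a learned
    # linear combination of both tasks' features. This is a stand-in
    # for the paper's learned information-sharing strategy.
    def __init__(self):
        super().__init__()
        # Initialized to keep mostly-own features with a small exchange.
        self.mix = nn.Parameter(torch.tensor([[0.9, 0.1],
                                              [0.1, 0.9]]))

    def forward(self, f_depth, f_sem):
        fd = self.mix[0, 0] * f_depth + self.mix[0, 1] * f_sem
        fs = self.mix[1, 0] * f_sem + self.mix[1, 1] * f_depth
        return fd, fs

Inserting such units between corresponding stages of the two branches lets gradients from the semantic labeling loss shape the depth features, and vice versa, which is one simple way to realize the mutual improvement the abstract describes.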

Related Material

[pdf]
[bibtex]
@InProceedings{Jiao_2018_ECCV,
  author    = {Jiao, Jianbo and Cao, Ying and Song, Yibing and Lau, Rynson},
  title     = {Look Deeper into Depth: Monocular Depth Estimation with Semantic Booster and Attention-Driven Loss},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  month     = {September},
  year      = {2018},
  pages     = {53-69}
}