Monocular Piecewise Depth Estimation in Dynamic Scenes by Exploiting Superpixel Relations

Yan Di, Henrique Morimitsu, Shan Gao, Xiangyang Ji; The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 4363-4372

Abstract


In this paper, we propose a novel method specifically designed for piecewise dense monocular depth estimation in dynamic scenes. We exploit spatial relations between neighboring superpixels to resolve the inherent relative scale ambiguity (RSA) problem and to smooth the depth map. However, estimating spatial relations directly is an ill-posed problem. Our core idea is therefore to predict spatial relations from the corresponding motion relations. Given two or more consecutive frames, we first compute semi-dense (CPM) or dense (optical flow) point matches between temporally neighboring images. We then proceed in four main stages: superpixel relation analysis, motion selection, reconstruction, and refinement. The final refinement stage improves the quality of the reconstruction at the pixel level. Our method requires no per-object segmentation, template priors, or training sets, which makes it flexible across a variety of applications. Extensive experiments on both synthetic and real datasets demonstrate that our method robustly handles different dynamic situations and achieves results competitive with state-of-the-art methods while running much faster.
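The four-stage pipeline outlined in the abstract can be sketched schematically. The sketch below is an illustrative toy, not the authors' implementation: it fakes a dense flow field, uses a trivial grid partition in place of real superpixels, and replaces the paper's reconstruction with a depth-proportional-to-inverse-flow heuristic that only holds for a purely translating camera. All function names are assumptions for illustration.

```python
import numpy as np

def grid_superpixels(h, w, cell):
    """Toy grid 'superpixels' (stand-in for a real superpixel segmentation)."""
    cols = (w + cell - 1) // cell
    ys, xs = np.mgrid[0:h, 0:w]
    return (ys // cell) * cols + (xs // cell)

def superpixel_flow(labels, flow):
    """Stage 1 (relation analysis): summarize each superpixel's motion."""
    n = labels.max() + 1
    return np.stack([flow[labels == s].mean(axis=0) for s in range(n)])

def adjacency(labels):
    """Neighboring-superpixel pairs (4-connectivity)."""
    pairs = set()
    for a, b in [(labels[:, :-1], labels[:, 1:]),
                 (labels[:-1, :], labels[1:, :])]:
        for u, v in zip(a.ravel(), b.ravel()):
            if u != v:
                pairs.add((min(u, v), max(u, v)))
    return pairs

def select_motions(sp_flow, tol=0.5):
    """Stage 2 (motion selection): greedily group superpixels with similar flow."""
    groups = -np.ones(len(sp_flow), dtype=int)
    g = 0
    for s in range(len(sp_flow)):
        if groups[s] >= 0:
            continue
        close = np.linalg.norm(sp_flow - sp_flow[s], axis=1) < tol
        groups[close & (groups < 0)] = g
        g += 1
    return groups

def reconstruct(sp_flow):
    """Stage 3 (reconstruction): toy depth ~ 1/|flow|, valid only under a
    purely translating camera -- an assumption made for illustration."""
    mag = np.linalg.norm(sp_flow, axis=1)
    return 1.0 / np.maximum(mag, 1e-6)

def refine(depth, pairs, iters=2, w=0.5):
    """Stage 4 (refinement): smooth depth between neighboring superpixels."""
    d = depth.copy()
    for _ in range(iters):
        acc, cnt = d.copy(), np.ones_like(d)
        for u, v in pairs:
            acc[u] += d[v]; cnt[u] += 1
            acc[v] += d[u]; cnt[v] += 1
        d = (1 - w) * d + w * (acc / cnt)
    return d

# Synthetic input: left half moves fast (near), right half slowly (far).
H, W, CELL = 8, 8, 4
labels = grid_superpixels(H, W, CELL)
flow = np.zeros((H, W, 2))
flow[:, :W // 2, 0] = 4.0   # near region: large apparent motion
flow[:, W // 2:, 0] = 1.0   # far region: small apparent motion

sp_flow = superpixel_flow(labels, flow)
pairs = adjacency(labels)
groups = select_motions(sp_flow)
depth = refine(reconstruct(sp_flow), pairs)
```

On this toy input, the two halves fall into two motion groups, and the fast-moving (near) superpixels end up with smaller depth than the slow-moving (far) ones, which is the qualitative behavior the pipeline description implies.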

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Di_2019_ICCV,
author = {Di, Yan and Morimitsu, Henrique and Gao, Shan and Ji, Xiangyang},
title = {Monocular Piecewise Depth Estimation in Dynamic Scenes by Exploiting Superpixel Relations},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}