Unsupervised Deep Epipolar Flow for Stationary or Dynamic Scenes

Yiran Zhong, Pan Ji, Jianyuan Wang, Yuchao Dai, Hongdong Li; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 12095-12104

Abstract


Unsupervised deep learning for optical flow computation has achieved promising results. Most existing deep-network-based methods rely on image brightness consistency and a local smoothness constraint to train the networks. Their performance degrades in regions with repetitive textures or occlusions. In this paper, we propose Deep Epipolar Flow, an unsupervised optical flow method that incorporates global geometric constraints into network learning. In particular, we investigate multiple ways of enforcing the epipolar constraint in flow estimation. To alleviate a "chicken-and-egg" type of problem encountered in dynamic scenes, where multiple motions may be present, we propose a low-rank constraint as well as a union-of-subspaces constraint for training. Experimental results on various benchmark datasets show that our method achieves competitive performance compared with supervised methods and outperforms state-of-the-art unsupervised deep-learning methods.
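As background for the epipolar constraint the abstract refers to: a correspondence (x1, x2) between two views of a static scene satisfies x2^T F x1 = 0, where F is the fundamental matrix. A minimal NumPy sketch of this residual, with flow-induced correspondences (the function name and the pure-translation F used for illustration are assumptions, not the paper's implementation):

```python
import numpy as np

def epipolar_residual(F, pts1, flow):
    """Algebraic epipolar residual x2^T F x1 per pixel correspondence.

    pts1 : (N, 2) pixel coordinates in the first image.
    flow : (N, 2) optical flow vectors; correspondences are pts2 = pts1 + flow.
    F    : (3, 3) fundamental matrix relating the two views.
    Returns an (N,) array; values near zero mean the flow is consistent
    with the epipolar geometry of a static scene.
    """
    pts2 = pts1 + flow
    x1 = np.hstack([pts1, np.ones((len(pts1), 1))])  # homogeneous coords
    x2 = np.hstack([pts2, np.ones((len(pts2), 1))])
    return np.einsum('ni,ij,nj->n', x2, F, x1)

# Illustrative geometry: a pure horizontal camera translation with identity
# intrinsics gives F = [e]_x with epipole e = (1, 0, 0), so epipolar lines
# are horizontal and any purely horizontal flow satisfies the constraint.
F = np.array([[0., 0., 0.],
              [0., 0., -1.],
              [0., 1., 0.]])
pts1 = np.array([[10., 20.], [30., 40.], [5., 7.]])
good_flow = np.array([[3., 0.], [1., 0.], [2., 0.]])   # along epipolar lines
bad_flow = np.array([[3., 2.], [1., -1.], [2., 4.]])   # violates the constraint
```

In an unsupervised setting, a loss built on such residuals penalizes flow fields that drift off the epipolar lines; for dynamic scenes a single F no longer explains all motions, which is the "chicken-and-egg" problem the low-rank and union-of-subspaces constraints address.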

Related Material


@InProceedings{Zhong_2019_CVPR,
author = {Zhong, Yiran and Ji, Pan and Wang, Jianyuan and Dai, Yuchao and Li, Hongdong},
title = {Unsupervised Deep Epipolar Flow for Stationary or Dynamic Scenes},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}