Noise-Aware Unsupervised Deep Lidar-Stereo Fusion

Xuelian Cheng, Yiran Zhong, Yuchao Dai, Pan Ji, Hongdong Li; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 6339-6348

Abstract

In this paper, we present LidarStereoNet, the first unsupervised Lidar-stereo fusion network, which can be trained end-to-end without the need for ground truth depth maps. By introducing a novel "Feedback Loop" connecting the network input with its output, LidarStereoNet can handle both noisy Lidar points and misalignment between sensors, issues that have been ignored in existing Lidar-stereo fusion work. In addition, we propose incorporating a piecewise planar model into network learning to further constrain the estimated depths to conform to the underlying 3D geometry. Extensive quantitative and qualitative evaluations on both real and synthetic datasets demonstrate the superiority of our method, which significantly outperforms state-of-the-art stereo matching, depth completion, and Lidar-stereo fusion approaches.

Related Material

[pdf] [supp]
[bibtex]
@InProceedings{Cheng_2019_CVPR,
author = {Cheng, Xuelian and Zhong, Yiran and Dai, Yuchao and Ji, Pan and Li, Hongdong},
title = {Noise-Aware Unsupervised Deep Lidar-Stereo Fusion},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}