A Deep Visual Correspondence Embedding Model for Stereo Matching Costs

Zhuoyuan Chen, Xun Sun, Liang Wang, Yinan Yu, Chang Huang; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015, pp. 972-980

Abstract


This paper presents a data-driven matching cost for stereo matching. A novel deep visual correspondence embedding model is trained via a convolutional neural network on a large set of stereo images with ground-truth disparities. The deep embedding model leverages appearance data to learn visual similarity relationships between corresponding image patches, and explicitly maps intensity values into an embedding feature space in which pixel dissimilarities are measured. Experimental results on the KITTI and Middlebury datasets demonstrate the effectiveness of our model. First, we show that the new measure of pixel dissimilarity outperforms traditional matching costs. Furthermore, when integrated with a global stereo framework, our method ranks in the top three among all two-frame algorithms on the KITTI benchmark. Finally, cross-validation results show that our model makes correct predictions on unseen data outside its labeled training set.
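
The abstract does not spell out the network or the cost function, but the core idea, embedding image patches with a CNN and scoring candidate correspondences by their distance in the embedded space, can be illustrated with a minimal sketch. Everything below is an assumption made for illustration: the PatchEmbeddingNet architecture, the 13x13 patch size, the embedding width, and the cosine-based dissimilarity are placeholders, not the authors' published design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEmbeddingNet(nn.Module):
    """Hypothetical siamese branch: maps a 13x13 grayscale patch to an
    L2-normalized embedding vector (architecture is illustrative only)."""
    def __init__(self, embed_dim=64):
        super().__init__()
        # Three 5x5 convolutions (no padding) shrink a 13x13 input to 1x1.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=5), nn.ReLU(inplace=True),
            nn.Conv2d(64, embed_dim, kernel_size=5),
        )

    def forward(self, patch):
        # patch: (N, 1, 13, 13) -> (N, embed_dim, 1, 1) -> (N, embed_dim)
        f = self.features(patch).flatten(1)
        return F.normalize(f, dim=1)

def matching_cost(net, left_patches, right_patches):
    """Pixel dissimilarity as distance in the embedding space:
    1 - cosine similarity, so lower cost means a better match."""
    fl, fr = net(left_patches), net(right_patches)
    return 1.0 - (fl * fr).sum(dim=1)

# Toy usage: costs for a batch of candidate left/right patch pairs.
net = PatchEmbeddingNet()
left = torch.randn(8, 1, 13, 13)
right = torch.randn(8, 1, 13, 13)
print(matching_cost(net, left, right))  # shape (8,), values in [0, 2]

In a full pipeline, costs of this kind would be evaluated for every pixel over a range of candidate disparities and then passed to a global stereo framework, matching the integration described in the abstract.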

Related Material


[bibtex]
@InProceedings{Chen_2015_ICCV,
author = {Chen, Zhuoyuan and Sun, Xun and Wang, Liang and Yu, Yinan and Huang, Chang},
title = {A Deep Visual Correspondence Embedding Model for Stereo Matching Costs},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2015},
pages = {972-980}
}