A Decomposition Model for Stereo Matching

Chengtang Yao, Yunde Jia, Huijun Di, Pengxiang Li, Yuwei Wu; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 6091-6100

Abstract


In this paper, we present a decomposition model for stereo matching that addresses the excessive growth of computational cost (in both time and memory) as the input resolution increases. To avoid the huge cost of matching at the original resolution, our model runs dense matching only at a very low resolution and uses sparse matching at progressively higher resolutions to recover, scale by scale, the disparity of details lost by downsampling. After this decomposition of stereo matching, our model iteratively fuses the sparse and dense disparity maps from adjacent scales with an occlusion-aware mask, and a refinement network is applied to further improve the fused result. Compared with high-performance methods such as PSMNet and GANet, our method achieves a 10-100x speed-up while obtaining comparable disparity estimation results.
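To make the decomposition concrete, the sketch below mimics the coarse-to-fine pipeline with classical stand-ins rather than the paper's learned components: a brute-force SAD cost volume plays the role of dense matching at the coarsest scale, a gradient-based detail mask approximates the selection of lost-detail pixels, and a small local search around the upsampled disparity stands in for sparse matching and fusion (the occlusion-aware mask and refinement network are omitted). All function names, parameters, and thresholds are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F


def sad_cost(left, right, disp):
    # Photometric cost (mean absolute difference) for one integer disparity.
    if disp > 0:
        shifted = F.pad(right, (disp, 0))[..., :right.shape[-1]]
    else:
        shifted = right
    return (left - shifted).abs().mean(dim=1, keepdim=True)


def dense_disparity(left, right, max_disp):
    # Exhaustive (dense) matching over the full disparity range;
    # affordable because it only runs at the coarsest scale.
    costs = torch.cat([sad_cost(left, right, d) for d in range(max_disp)], dim=1)
    return costs.argmin(dim=1, keepdim=True).float()


def detail_mask(image, thresh=0.05):
    # High-gradient pixels approximate the details lost by downsampling.
    gray = image.mean(dim=1, keepdim=True)
    gx = F.pad(gray[..., :, 1:] - gray[..., :, :-1], (0, 1))
    gy = F.pad(gray[..., 1:, :] - gray[..., :-1, :], (0, 0, 0, 1))
    return ((gx.abs() + gy.abs()) > thresh).float()


def sparse_update(left, right, disp_up, mask, radius=2):
    # "Sparse" matching: search only a small window around the upsampled
    # disparity, and keep the result only at masked (detail) pixels.
    B, C, H, W = left.shape
    xs = torch.arange(W, device=left.device, dtype=left.dtype).view(1, 1, 1, W)
    ys = torch.arange(H, device=left.device, dtype=left.dtype).view(1, 1, H, 1).expand(B, 1, H, W)
    best_cost = torch.full_like(disp_up, float('inf'))
    best_disp = disp_up.clone()
    for delta in range(-radius, radius + 1):
        cand = (disp_up + delta).clamp(min=0)
        # Warp the right image by the candidate disparity field.
        grid_x = (xs - cand) / max(W - 1, 1) * 2 - 1
        grid_y = ys / max(H - 1, 1) * 2 - 1
        grid = torch.cat([grid_x, grid_y], dim=1).permute(0, 2, 3, 1)
        warped = F.grid_sample(right, grid, align_corners=True)
        cost = (left - warped).abs().mean(dim=1, keepdim=True)
        better = cost < best_cost
        best_cost = torch.where(better, cost, best_cost)
        best_disp = torch.where(better, cand, best_disp)
    # Fusion: sparse estimate at detail pixels, upsampled dense estimate elsewhere
    # (the paper's occlusion-aware mask is simplified to the detail mask here).
    return mask * best_disp + (1 - mask) * disp_up


def decomposed_matching(left_pyr, right_pyr, max_disp_coarse=16):
    # left_pyr / right_pyr: image pyramids from fine (index 0) to coarse (last),
    # each level at half the resolution of the previous one.
    disp = dense_disparity(left_pyr[-1], right_pyr[-1], max_disp_coarse)
    for left, right in zip(reversed(left_pyr[:-1]), reversed(right_pyr[:-1])):
        disp = F.interpolate(disp, size=left.shape[-2:], mode='bilinear',
                             align_corners=False) * 2.0  # disparity scales with width
        disp = sparse_update(left, right, disp, detail_mask(left))
    return disp


if __name__ == "__main__":
    # Toy run on random images; a real model would use learned features instead.
    left = torch.rand(1, 3, 64, 128)
    right = torch.rand(1, 3, 64, 128)
    left_pyr = [left, F.avg_pool2d(left, 2), F.avg_pool2d(left, 4)]
    right_pyr = [right, F.avg_pool2d(right, 2), F.avg_pool2d(right, 4)]
    print(decomposed_matching(left_pyr, right_pyr).shape)  # torch.Size([1, 1, 64, 128])

The point of the sketch is the cost structure: the full disparity search happens only on the smallest image, while every finer scale does constant work per pixel (a 2*radius+1 window), which is what keeps time and memory from blowing up with resolution.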

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Yao_2021_CVPR,
    author    = {Yao, Chengtang and Jia, Yunde and Di, Huijun and Li, Pengxiang and Wu, Yuwei},
    title     = {A Decomposition Model for Stereo Matching},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {6091-6100}
}