EAI-Stereo: Error Aware Iterative Network for Stereo Matching

Haoliang Zhao, Huizhou Zhou, Yongjun Zhang, Yong Zhao, Yitong Yang, Ting Ouyang; Proceedings of the Asian Conference on Computer Vision (ACCV), 2022, pp. 315-332

Abstract


Current state-of-the-art stereo algorithms use a 2D CNN to extract features and then form a cost volume, which is fed into a subsequent cost aggregation and regularization module composed of 2D or 3D CNNs. However, a large amount of high-frequency information such as texture, color variation, and sharp edges is not well exploited during this process, which leads to disparity maps that are relatively blurry and lacking in detail. In this paper, we aim to make full use of the high-frequency information in the original image. To this end, we propose an error-aware refinement module that incorporates high-frequency information from the original left image and allows the network to learn error-correction capabilities, producing fine details and sharp edges. To improve data transfer efficiency between iterations, we propose the Iterative Multiscale Wide-LSTM Network, which can carry more semantic information across iterations. We demonstrate the efficiency and effectiveness of our method on KITTI 2015, Middlebury, and ETH3D. At the time of writing, EAI-Stereo ranks 1st on the Middlebury leaderboard and, among all published methods, 1st on the ETH3D Stereo benchmark for the 50% quantile metric and 2nd for the 0.5px error rate. Our model performs well in cross-domain scenarios and outperforms current methods specifically designed for generalization.
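
To make the error-aware refinement idea concrete, below is a minimal, hypothetical sketch (not the authors' implementation): it warps the right image into the left view using the current disparity estimate, treats the photometric difference against the original left image as an error map carrying high-frequency cues, and lets a small CNN predict a residual disparity correction. The module name, channel sizes, and warping scheme are assumptions made purely for illustration.

# Illustrative sketch of an error-aware refinement step, assuming a
# warp-and-compare formulation; not the paper's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


def warp_right_to_left(right, disparity):
    """Warp the right image into the left view using horizontal disparity (in pixels)."""
    b, _, h, w = right.shape
    xs = torch.linspace(-1.0, 1.0, w, device=right.device)
    ys = torch.linspace(-1.0, 1.0, h, device=right.device)
    grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
    # Shift x-coordinates by the disparity, converted to normalized [-1, 1] units.
    grid_x = grid_x.unsqueeze(0) - 2.0 * disparity.squeeze(1) / max(w - 1, 1)
    grid = torch.stack((grid_x, grid_y.unsqueeze(0).expand_as(grid_x)), dim=-1)
    return F.grid_sample(right, grid, align_corners=True)


class ErrorAwareRefinement(nn.Module):
    """Predicts a residual disparity update from the photometric error map
    and the original left image (a source of high-frequency detail)."""

    def __init__(self, hidden=32):
        super().__init__()
        # Input channels: left image (3) + photometric error (3) + current disparity (1).
        self.net = nn.Sequential(
            nn.Conv2d(7, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 1, 3, padding=1),
        )

    def forward(self, left, right, disparity):
        warped = warp_right_to_left(right, disparity)
        error = left - warped                       # error map with high-frequency cues
        delta = self.net(torch.cat([left, error, disparity], dim=1))
        return disparity + delta                    # refined disparity


# Toy usage: one refinement step on random data.
left = torch.rand(1, 3, 64, 128)
right = torch.rand(1, 3, 64, 128)
disp = torch.zeros(1, 1, 64, 128)
refined = ErrorAwareRefinement()(left, right, disp)
print(refined.shape)  # torch.Size([1, 1, 64, 128])

In the full method, such a correction step would presumably sit inside the iterative Multiscale Wide-LSTM loop, with hidden state carrying semantic information between iterations; the sketch isolates only the error-aware refinement component described in the abstract.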

Related Material


[bibtex]
@InProceedings{Zhao_2022_ACCV,
    author    = {Zhao, Haoliang and Zhou, Huizhou and Zhang, Yongjun and Zhao, Yong and Yang, Yitong and Ouyang, Ting},
    title     = {EAI-Stereo: Error Aware Iterative Network for Stereo Matching},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2022},
    pages     = {315-332}
}