SGNet: Semantics Guided Deep Stereo Matching

Shuya Chen, Zhiyu Xiang, Chengyu Qiao, Yiman Chen, Tingming Bai; Proceedings of the Asian Conference on Computer Vision (ACCV), 2020

Abstract


Stereovision has been an intensive research area of computer vision. Based on deep learning, stereo matching networks have become popular in recent years. Despite great progress, it is still challenging to achieve highly accurate disparity maps due to low texture and illumination changes in the scene. High-level semantic information can help handle these problems. In this paper, a deep semantics-guided stereo matching network (SGNet) is proposed. Apart from the necessary semantic branch, three semantics-guided modules are proposed to embed semantic constraints on matching. The joint confidence module produces the confidence of the cost volume based on the consistency of disparity and semantic features between the left and right images. The residual module is responsible for optimizing the initial disparity results according to their semantic categories. Finally, in the loss module, the smoothness of the disparity is supervised based on semantic boundaries and regions. The proposed network has been evaluated on various public datasets such as KITTI 2015, KITTI 2012 and Virtual KITTI, and achieves state-of-the-art performance.
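To illustrate the idea behind the loss module, the sketch below shows one common way a semantics-guided smoothness term can be formed: disparity gradients are penalized only where neighboring pixels share a semantic label, so the disparity map is encouraged to be smooth inside semantic regions while discontinuities at semantic boundaries go unpenalized. This is a minimal NumPy sketch under those assumptions, not the paper's actual implementation; the function name and the L1 gradient penalty are illustrative choices.

```python
import numpy as np

def semantic_smoothness_loss(disparity, labels):
    """Illustrative semantics-guided smoothness term (not the paper's exact loss).

    disparity: (H, W) float array of predicted disparities.
    labels:    (H, W) integer array of semantic class ids.

    Penalizes the L1 disparity gradient only between neighboring pixels
    that belong to the same semantic region; gradients across semantic
    boundaries are masked out, so depth edges at object borders stay sharp.
    """
    # Horizontal and vertical disparity gradients (L1).
    dx = np.abs(disparity[:, 1:] - disparity[:, :-1])
    dy = np.abs(disparity[1:, :] - disparity[:-1, :])
    # Masks: 1 where the two neighboring pixels share a semantic label.
    mx = (labels[:, 1:] == labels[:, :-1]).astype(np.float64)
    my = (labels[1:, :] == labels[:-1, :]).astype(np.float64)
    # Average masked gradient magnitude over the image.
    return float((dx * mx).sum() + (dy * my).sum()) / disparity.size
```

For example, a disparity step that coincides with a semantic boundary contributes nothing to this term, while the same step inside a uniform region is penalized.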

Related Material


[pdf]
[bibtex]
@InProceedings{Chen_2020_ACCV,
  author    = {Chen, Shuya and Xiang, Zhiyu and Qiao, Chengyu and Chen, Yiman and Bai, Tingming},
  title     = {SGNet: Semantics Guided Deep Stereo Matching},
  booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
  month     = {November},
  year      = {2020}
}