EDNet: Efficient Disparity Estimation With Cost Volume Combination and Attention-Based Spatial Residual

Songyan Zhang, Zhicheng Wang, Qiang Wang, Jinshuo Zhang, Gang Wei, Xiaowen Chu
Abstract
Existing state-of-the-art disparity estimation works mostly leverage the 4D concatenation volume and construct a very deep 3D convolutional neural network (CNN) for disparity regression, which is inefficient due to high memory consumption and slow inference speed. In this paper, we propose a network named EDNet for efficient disparity estimation. First, we construct a combined volume that incorporates contextual information from a squeezed concatenation volume and feature similarity measurements from a correlation volume. The combined volume can then be aggregated by 2D convolutions, which are faster and require less memory than 3D convolutions. Second, we propose an attention-based spatial residual module to generate attention-aware residual features. The attention mechanism provides intuitive spatial evidence about inaccurate regions with the help of error maps at multiple scales, thereby improving the efficiency of residual learning. Extensive experiments on the Scene Flow and KITTI datasets show that EDNet outperforms previous 3D-CNN-based works and achieves state-of-the-art performance with significantly faster speed and lower memory consumption.
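To make the combined-volume idea concrete, the following is a minimal PyTorch sketch of one plausible construction, not the authors' implementation: a correlation volume scores feature similarity per disparity candidate, a concatenation volume is squeezed to one channel per candidate, and the two are stacked so that ordinary 2D convolutions can aggregate the result. The function name build_combined_volume, the max_disp parameter, and the 1x1 squeeze layer are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

def build_combined_volume(left_feat, right_feat, max_disp, squeeze):
    """Hypothetical sketch of a combined cost volume (not the paper's code).

    left_feat, right_feat: [B, C, H, W] features from a shared encoder.
    max_disp: number of disparity candidates at this feature scale.
    squeeze: a 1x1 conv (2C -> 1) compressing each concatenated slice.
    Returns a [B, 2*max_disp, H, W] volume suited to 2D aggregation.
    """
    B, C, H, W = left_feat.shape
    corr_slices, cat_slices = [], []
    for d in range(max_disp):
        # Shift right-image features d pixels rightward, zero-padding the left edge.
        shifted = F.pad(right_feat[:, :, :, : W - d], (d, 0)) if d > 0 else right_feat
        # Correlation volume: mean dot-product similarity -> 1 channel per disparity.
        corr_slices.append((left_feat * shifted).mean(dim=1, keepdim=True))
        # Squeezed concatenation volume: compress 2C channels of context to 1.
        cat_slices.append(squeeze(torch.cat([left_feat, shifted], dim=1)))
    corr = torch.cat(corr_slices, dim=1)  # [B, max_disp, H, W]
    cat = torch.cat(cat_slices, dim=1)    # [B, max_disp, H, W]
    # Combined volume: similarity and contextual cues side by side.
    return torch.cat([corr, cat], dim=1)  # [B, 2*max_disp, H, W]

# Usage example with illustrative shapes:
# squeeze = nn.Conv2d(2 * 32, 1, kernel_size=1)
# vol = build_combined_volume(torch.randn(1, 32, 64, 128),
#                             torch.randn(1, 32, 64, 128),
#                             max_disp=24, squeeze=squeeze)

Because the disparity dimension lives in the channel axis rather than a separate 4D axis, the volume can be processed by 2D convolutions, which is the source of the speed and memory savings the abstract claims over deep 3D-CNN aggregation.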
Related Material

[pdf] [arXiv] [bibtex]

@InProceedings{Zhang_2021_CVPR,
    author    = {Zhang, Songyan and Wang, Zhicheng and Wang, Qiang and Zhang, Jinshuo and Wei, Gang and Chu, Xiaowen},
    title     = {EDNet: Efficient Disparity Estimation With Cost Volume Combination and Attention-Based Spatial Residual},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {5433-5442}
}