MVS2D: Efficient Multi-View Stereo via Attention-Driven 2D Convolutions

Zhenpei Yang, Zhile Ren, Qi Shan, Qixing Huang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 8574-8584

Abstract


Deep learning has made significant impacts on multi-view stereo systems. State-of-the-art approaches typically involve building a cost volume, followed by multiple 3D convolution operations, to recover the input image's pixel-wise depth. While such end-to-end learning of plane-sweeping stereo advances accuracy on public benchmarks, these methods are typically very slow to compute. We present MVS2D, a highly efficient multi-view stereo algorithm that seamlessly integrates multi-view constraints into single-view networks via an attention mechanism. Since MVS2D only builds on 2D convolutions, it is at least 2x faster than all the notable counterparts. Moreover, our algorithm produces precise depth estimations and 3D reconstructions, achieving state-of-the-art results on the challenging benchmarks ScanNet, SUN3D, and RGBD, as well as the classical DTU dataset. Our algorithm also outperforms all other algorithms in the setting of inexact camera poses. Our code is released at https://github.com/zhenpeiyang/MVS2D.
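
As a rough illustration of the idea described in the abstract, the sketch below (PyTorch; not the authors' released code, and all module and tensor names are hypothetical) shows one way per-pixel attention over source-view features, sampled along epipolar lines at D depth hypotheses, could inject multi-view evidence into a reference view's 2D feature map, so that all subsequent processing stays in plain 2D convolutions rather than a 3D cost-volume network. The epipolar sampling itself is assumed to happen elsewhere using the camera poses.

```python
# Hypothetical sketch of attention-driven multi-view fusion for a 2D CNN.
# Not the MVS2D implementation; shapes and names are illustrative only.
import torch
import torch.nn as nn

class EpipolarAttention2D(nn.Module):
    """Aggregates source-view features into the reference feature map.

    Assumes `src_feats` already holds features sampled along each reference
    pixel's epipolar line at D depth hypotheses (sampling done outside this
    module from the camera poses).
    """
    def __init__(self, channels: int):
        super().__init__()
        self.to_q = nn.Conv2d(channels, channels, kernel_size=1)
        self.to_k = nn.Linear(channels, channels)
        self.to_v = nn.Linear(channels, channels)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, ref_feat, src_feats):
        # ref_feat:  (B, C, H, W)    reference-view 2D feature map
        # src_feats: (B, H, W, D, C) features at D depth hypotheses per pixel
        B, C, H, W = ref_feat.shape
        q = self.to_q(ref_feat).permute(0, 2, 3, 1).unsqueeze(3)  # (B,H,W,1,C)
        k = self.to_k(src_feats)                                  # (B,H,W,D,C)
        v = self.to_v(src_feats)
        # Scaled dot-product attention over the D depth hypotheses.
        attn = torch.softmax((q * k).sum(-1) / C ** 0.5, dim=-1)  # (B,H,W,D)
        agg = (attn.unsqueeze(-1) * v).sum(dim=3)                 # (B,H,W,C)
        agg = agg.permute(0, 3, 1, 2)                             # (B,C,H,W)
        # Concatenate and fuse with a 2D convolution: no 3D cost volume.
        return self.fuse(torch.cat([ref_feat, agg], dim=1))

# Toy usage: batch of 2, 32 channels, 24x32 feature map, 64 depth hypotheses.
m = EpipolarAttention2D(channels=32)
out = m(torch.randn(2, 32, 24, 32), torch.randn(2, 24, 32, 64, 32))
print(out.shape)  # torch.Size([2, 32, 24, 32])
```

The output is an ordinary 2D feature map, which is why the rest of the network can remain a standard single-view 2D convolutional architecture.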

Related Material


[bibtex]
@InProceedings{Yang_2022_CVPR,
    author    = {Yang, Zhenpei and Ren, Zhile and Shan, Qi and Huang, Qixing},
    title     = {MVS2D: Efficient Multi-View Stereo via Attention-Driven 2D Convolutions},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {8574-8584}
}