EPP-MVSNet: Epipolar-Assembling Based Depth Prediction for Multi-View Stereo

Xinjun Ma, Yue Gong, Qirui Wang, Jingwei Huang, Lei Chen, Fan Yu; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 5732-5740

Abstract


In this paper, we propose EPP-MVSNet, a novel deep learning network for 3D reconstruction from multi-view stereo (MVS). EPP-MVSNet accurately aggregates features at high resolution into a limited cost volume with an optimal depth range, thus leading to effective and efficient 3D reconstruction. In contrast to existing works, which measure feature cost at discrete positions and thereby limit 3D reconstruction accuracy, EPP-MVSNet introduces an epipolar assembling-based kernel that operates on adaptive intervals along epipolar lines to make full use of the image resolution. Further, we introduce an entropy-based refining strategy in which the cost volume describes the space geometry with little redundancy. Moreover, we design a lightweight network with Pseudo-3D convolutions integrated to achieve high accuracy and efficiency. We have conducted extensive experiments on the challenging Tanks & Temples (TNT), ETH3D, and DTU datasets. As a result, we achieve promising results on all datasets and the highest F-Score on the online TNT intermediate benchmark. Code is available at https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/cv/eppmvsnet.
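The abstract mentions Pseudo-3D convolutions for the lightweight regularization network. The sketch below is a minimal PyTorch illustration of the standard Pseudo-3D factorization (a 3x3x3 convolution split into a 1x3x3 spatial convolution followed by a 3x1x1 depth-direction convolution), assuming it is applied to a cost volume of shape (B, C, D, H, W). The official implementation is in MindSpore (see the repository linked above) and its exact block design may differ; all layer and class names here are illustrative.

```python
# Hedged sketch: a Pseudo-3D (P3D) convolution block for cost-volume regularization.
# Assumes the common 1x3x3 (spatial) + 3x1x1 (depth) factorization; not the authors' exact design.
import torch
import torch.nn as nn


class P3DConvBlock(nn.Module):
    """Factorized replacement for a full 3D convolution on a cost volume (B, C, D, H, W)."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        # 1x3x3 convolution mixes information spatially within each depth plane.
        self.spatial = nn.Conv3d(in_channels, out_channels,
                                 kernel_size=(1, 3, 3), padding=(0, 1, 1), bias=False)
        # 3x1x1 convolution mixes information across neighboring depth hypotheses.
        self.depth = nn.Conv3d(out_channels, out_channels,
                               kernel_size=(3, 1, 1), padding=(1, 0, 0), bias=False)
        self.bn = nn.BatchNorm3d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.relu(self.bn(self.depth(self.spatial(x))))


if __name__ == "__main__":
    # Toy cost volume: batch 1, 8 feature channels, 32 depth hypotheses, 40x52 spatial grid.
    volume = torch.randn(1, 8, 32, 40, 52)
    block = P3DConvBlock(8, 8)
    print(block(volume).shape)  # torch.Size([1, 8, 32, 40, 52])
```

Compared with a full 3x3x3 convolution, this factorization reduces parameters and compute per block, which is why it is a common choice for lightweight cost-volume regularization.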

Related Material


[bibtex]
@InProceedings{Ma_2021_ICCV,
    author    = {Ma, Xinjun and Gong, Yue and Wang, Qirui and Huang, Jingwei and Chen, Lei and Yu, Fan},
    title     = {EPP-MVSNet: Epipolar-Assembling Based Depth Prediction for Multi-View Stereo},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {5732-5740}
}