RayMVSNet: Learning Ray-Based 1D Implicit Fields for Accurate Multi-View Stereo

Junhua Xi, Yifei Shi, Yijie Wang, Yulan Guo, Kai Xu; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 8595-8605

Abstract


Learning-based multi-view stereo (MVS) has so far centered on applying 3D convolutions to cost volumes. Due to the high computation and memory consumption of 3D CNNs, the resolution of the output depth is often considerably limited. Different from most existing works dedicated to adaptive refinement of cost volumes, we opt to directly optimize the depth value along each camera ray, mimicking the range (depth) finding of a laser scanner. This reduces the MVS problem to ray-based depth optimization, which is much more lightweight than full cost volume optimization. In particular, we propose RayMVSNet, which learns sequential prediction of a 1D implicit field along each camera ray, with the zero-crossing point indicating scene depth. This sequential modeling, conducted on transformer features, essentially learns the epipolar line search of traditional multi-view stereo. We also devise a multi-task learning scheme for better optimization convergence and depth accuracy. Our method ranks first among previous learning-based methods on both the DTU and the Tanks & Temples datasets, achieving an overall reconstruction score of 0.33 mm on DTU and an F-score of 59.48% on Tanks & Temples.
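To make the central idea concrete, below is a minimal sketch (not the authors' implementation) of the zero-crossing step the abstract describes: given a 1D implicit field predicted at discrete depth hypotheses along each camera ray, the scene depth is recovered where the field changes sign. The function name `zero_crossing_depth` and the tensors `sdf` and `depth_hyps` are hypothetical placeholders; the sign convention (negative in front of the surface, positive behind) is an assumption for illustration.

```python
import torch

def zero_crossing_depth(sdf: torch.Tensor, depth_hyps: torch.Tensor) -> torch.Tensor:
    """Locate the zero-crossing of a 1D implicit field along each ray.

    sdf:        (R, K) signed field values at K depth hypotheses per ray
                (assumed negative before the surface, positive behind it).
    depth_hyps: (R, K) corresponding depths, monotonically increasing per ray.
    Returns:    (R,) depth at the first sign change, obtained by linear
                interpolation between the two bracketing hypotheses.
    """
    # Mask of intervals whose endpoints bracket a zero-crossing.
    sign_change = (sdf[:, :-1] * sdf[:, 1:]) <= 0            # (R, K-1)
    # Index of the first bracketing interval (rays with no crossing fall back to 0).
    idx = torch.argmax(sign_change.float(), dim=1)           # (R,)
    rows = torch.arange(sdf.shape[0])
    f0, f1 = sdf[rows, idx], sdf[rows, idx + 1]
    d0, d1 = depth_hyps[rows, idx], depth_hyps[rows, idx + 1]
    # Linear interpolation: fraction of the interval at which the field hits zero.
    w = f0 / (f0 - f1 + 1e-8)
    return d0 + w * (d1 - d0)

# Toy usage: 4 rays, 16 hypotheses, synthetic surface at depth 1.3.
rays, hyps = 4, 16
depth_hyps = torch.linspace(0.5, 2.0, hyps).expand(rays, hyps)
sdf = depth_hyps - 1.3
print(zero_crossing_depth(sdf, depth_hyps))   # ~1.3 for every ray
```

In the paper the per-hypothesis field values come from a learned sequential (transformer-based) predictor rather than an analytic function as in this toy example; the sketch only illustrates why a 1D field per ray suffices to read off depth.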

Related Material


@InProceedings{Xi_2022_CVPR,
  author    = {Xi, Junhua and Shi, Yifei and Wang, Yijie and Guo, Yulan and Xu, Kai},
  title     = {RayMVSNet: Learning Ray-Based 1D Implicit Fields for Accurate Multi-View Stereo},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2022},
  pages     = {8595-8605}
}