@InProceedings{Burgdorfer_2023_ICCV,
  author    = {Burgdorfer, Nathaniel and Mordohai, Philippos},
  title     = {V-FUSE: Volumetric Depth Map Fusion with Long-Range Constraints},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2023},
  pages     = {3449-3458}
}
V-FUSE: Volumetric Depth Map Fusion with Long-Range Constraints
Abstract
We introduce a learning-based depth map fusion framework that accepts a set of depth and confidence maps generated by a Multi-View Stereo (MVS) algorithm as input and improves them. This is accomplished by integrating volumetric visibility constraints that encode long-range surface relationships across different views into an end-to-end trainable architecture. We also introduce a depth search window estimation sub-network, trained jointly with the larger fusion sub-network, to reduce the depth hypothesis search space along each ray. Our method learns to model depth consensus and violations of visibility constraints directly from the data, effectively removing the need to hand-tune fusion parameters. Extensive experiments on MVS datasets show substantial improvements in the accuracy of the output fused depth and confidence maps.
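To make the fusion setting concrete, the sketch below shows a classical, hand-crafted baseline for the same input/output interface the abstract describes: per-view depth and confidence maps in, a fused depth and confidence map out. This is an illustrative confidence-weighted average with a simple cross-view consistency check, not the paper's method; V-FUSE replaces exactly this kind of heuristic (and its tunable threshold) with a learned, end-to-end trainable network. The function name and the `consistency_thresh` parameter are assumptions for illustration, and the depth maps are assumed to be already aligned to a common reference view.

```python
import numpy as np

def fuse_depth_maps(depths, confs, consistency_thresh=0.05):
    """Illustrative baseline fusion (NOT the V-FUSE network).

    depths, confs: lists of (H, W) arrays, one per view, assumed
    already warped/aligned into the reference view (view 0).
    consistency_thresh: relative depth-agreement threshold; a
    hand-tuned parameter of the kind the learned method avoids.
    Returns a fused depth map and a fused confidence map.
    """
    depths = np.stack(depths)  # (V, H, W)
    confs = np.stack(confs)    # (V, H, W)
    ref = depths[0]

    # Mask out views whose depth disagrees with the reference
    # by more than the relative threshold (a crude stand-in for
    # the visibility constraints the paper encodes volumetrically).
    rel_err = np.abs(depths - ref) / np.maximum(ref, 1e-6)
    mask = (rel_err < consistency_thresh) & (confs > 0)

    # Confidence-weighted average over the consistent views.
    w = confs * mask
    wsum = w.sum(axis=0)
    fused = np.where(wsum > 0,
                     (w * depths).sum(axis=0) / np.maximum(wsum, 1e-6),
                     ref)

    # Fused confidence: fraction of total confidence that agreed.
    fused_conf = wsum / np.maximum(confs.sum(axis=0), 1e-6)
    return fused, fused_conf
```

A pixel supported by all views keeps full confidence, while a pixel where views disagree gets a lower fused confidence, mirroring how the output confidence maps reflect depth consensus.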