Disentangling Local and Global Information for Light Field Depth Estimation

Xueting Yang, Junli Deng, Rongshan Chen, Ruixuan Cong, Wei Ke, Hao Sheng; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2023, pp. 3419-3427

Abstract


Accurate depth estimation from light field images is essential for various applications. Deep learning-based techniques have shown great potential in addressing this problem, but they still face challenges such as sensitivity to occlusions and difficulty in handling untextured areas. To overcome these limitations, we propose a novel approach that utilizes both local and global features in the cost volume for depth estimation. Specifically, our hybrid cost volume network consists of two complementary sub-modules: a 2D ContextNet for global context information and a matching cost volume for local feature information. We also introduce an occlusion-aware loss that accounts for occluded areas to improve depth estimation quality. We demonstrate the effectiveness of our approach on the UrbanLF and HCInew datasets, showing significant improvements over existing methods, especially in occluded and untextured regions. Our method explicitly disentangles local feature information from global semantic information, reducing reconstruction error in occluded and untextured areas and improving the accuracy of depth estimation.
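Since only the abstract is shown here, the sketch below is a minimal PyTorch illustration of the two ideas it describes, not the authors' implementation: it assumes a concatenation-based fusion of 2D context features with a matching cost volume, soft-argmax disparity regression, and an L1 loss re-weighted by a hypothetical binary occlusion mask with factor alpha; none of these specifics come from the paper.

```python
# Minimal sketch of the abstract's two ideas, NOT the authors' code.
# Assumptions: concatenation-based fusion, soft-argmax regression,
# and occlusion-weighted L1 supervision with factor `alpha`.
import torch
import torch.nn as nn


class HybridCostVolumeHead(nn.Module):
    """Toy fusion of a global 2D context branch with a local matching cost volume."""

    def __init__(self, ctx_channels=32, cost_channels=16, num_disp=9):
        super().__init__()
        self.num_disp = num_disp
        # Stand-in for a 2D ContextNet operating on the central view.
        self.context_net = nn.Sequential(
            nn.Conv2d(3, ctx_channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ctx_channels, ctx_channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        # 3D aggregation over the fused (local cost + global context) volume.
        self.aggregate = nn.Sequential(
            nn.Conv3d(cost_channels + ctx_channels, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 1, 3, padding=1),
        )

    def forward(self, center_view, cost_volume, disp_candidates):
        # center_view: (B, 3, H, W); cost_volume: (B, C, D, H, W); disp_candidates: (D,)
        ctx = self.context_net(center_view)                            # (B, Cc, H, W)
        ctx = ctx.unsqueeze(2).expand(-1, -1, self.num_disp, -1, -1)   # broadcast over D
        fused = torch.cat([cost_volume, ctx], dim=1)                   # (B, C+Cc, D, H, W)
        prob = torch.softmax(self.aggregate(fused).squeeze(1), dim=1)  # (B, D, H, W)
        # Soft-argmax regression to a continuous disparity map.
        return (prob * disp_candidates.view(1, -1, 1, 1)).sum(dim=1, keepdim=True)


def occlusion_aware_l1(pred, gt, occ_mask, alpha=2.0):
    """L1 depth loss that up-weights pixels flagged as occluded (hypothetical weighting)."""
    weights = 1.0 + (alpha - 1.0) * occ_mask.float()   # occ_mask: 1 at occluded pixels
    return (weights * (pred - gt).abs()).sum() / weights.sum()
```

With, say, 9 disparity candidates, `cost_volume` would have shape (B, 16, 9, H, W) and the head returns a (B, 1, H, W) disparity map that can then be supervised with `occlusion_aware_l1`.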

Related Material


[pdf]
[bibtex]
@InProceedings{Yang_2023_CVPR,
    author    = {Yang, Xueting and Deng, Junli and Chen, Rongshan and Cong, Ruixuan and Ke, Wei and Sheng, Hao},
    title     = {Disentangling Local and Global Information for Light Field Depth Estimation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2023},
    pages     = {3419-3427}
}