@InProceedings{Spencer_2024_CVPR,
  author    = {Spencer, Jaime and Tosi, Fabio and Poggi, Matteo and Arora, Ripudaman Singh and Russell, Chris and Hadfield, Simon and Bowden, Richard and Zhou, Guangyuan and Li, Zhengxin and Rao, Qiang and Bao, Yiping and Liu, Xiao and Kim, Dohyeong and Kim, Jinseong and Kim, Myunghyun and Lavreniuk, Mykola and Li, Rui and Mao, Qing and Wu, Jiang and Zhu, Yu and Sun, Jinqiu and Zhang, Yanning and Patni, Suraj and Agarwal, Aradhye and Arora, Chetan and Sun, Pihai and Jiang, Kui and Wu, Gang and Liu, Jian and Liu, Xianming and Jiang, Junjun and Zhang, Xidan and Wei, Jianing and Wang, Fangjun and Tan, Zhiming and Wang, Jiabao and Luginov, Albert and Shahzad, Muhammad and Hosseini, Seyed and Trajcevski, Aleksander and Elder, James H.},
  title     = {The Third Monocular Depth Estimation Challenge},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2024},
  pages     = {1-14}
}
The Third Monocular Depth Estimation Challenge
Abstract
This paper discusses the results of the third edition of the Monocular Depth Estimation Challenge (MDEC). The challenge focuses on zero-shot generalization to the challenging SYNS-Patches dataset, which features complex scenes in natural and indoor settings. As in the previous edition, methods could use any form of supervision, i.e., supervised or self-supervised. The challenge received a total of 19 submissions outperforming the baseline on the test set; 10 of them submitted a report describing their approach, highlighting the widespread use of foundation models such as Depth Anything at the core of their methods. The challenge winners drastically improved 3D F-Score performance, from 17.51% to 23.72%.