PhoneDepth: A Dataset for Monocular Depth Estimation on Mobile Devices

Fausto Tapia Benavides, Andrey Ignatov, Radu Timofte; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2022, pp. 3049-3056

Abstract


Monocular depth estimation has been studied as a classic and learning-based computer vision problem for decades. However, the efficiency of these methods and their deployment on mobile hardware have received little attention. All publicly available datasets have severe limitations when applied to camera data captured with real mobile devices. Their main issues include (but are not limited to) low image quality due to the cameras or collection methods used, domain-specific content as in autonomous-driving datasets, small numbers of samples, and sparse depth maps. In response, we introduce PhoneDepth, a novel dataset that takes advantage of modern phone hardware and professional stereo cameras. Depth maps are acquired from three sources: a Time-of-Flight sensor, a Dual Pixel sensor, and a stereo camera, while the images correspond to mobile phone photos. We demonstrate its value by training neural networks with multiple depth supervision, by fine-tuning on other datasets, and for depth refinement. Along with the dataset, we present benchmark models and a toolbox that facilitates its usage.
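
The abstract mentions training with multiple depth supervision from the three depth sources. The sketch below is purely illustrative and assumes PyTorch-style tensors; the function name, per-source weights, and L1 loss form are assumptions for illustration, not the paper's actual training objective.

import torch
import torch.nn.functional as F

def multi_supervision_loss(pred, tof_depth, dp_depth, stereo_depth,
                           weights=(1.0, 0.5, 0.5)):
    # Hypothetical combined loss over the three PhoneDepth depth sources.
    # pred:         predicted depth map, shape (B, 1, H, W)
    # tof_depth:    Time-of-Flight depth map (same shape)
    # dp_depth:     Dual Pixel depth map
    # stereo_depth: stereo-camera depth map
    # weights:      per-source weighting (illustrative values)
    w_tof, w_dp, w_stereo = weights
    loss = (w_tof * F.l1_loss(pred, tof_depth)
            + w_dp * F.l1_loss(pred, dp_depth)
            + w_stereo * F.l1_loss(pred, stereo_depth))
    return loss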

Related Material


[pdf]
[bibtex]
@InProceedings{Benavides_2022_CVPR,
    author    = {Benavides, Fausto Tapia and Ignatov, Andrey and Timofte, Radu},
    title     = {PhoneDepth: A Dataset for Monocular Depth Estimation on Mobile Devices},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2022},
    pages     = {3049-3056}
}