Deep Depth From Aberration Map

Masako Kashiwagi, Nao Mishima, Tatsuo Kozakaya, Shinsaku Hiura; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 4070-4079

Abstract


Passive and convenient depth estimation from a single-shot image is still an open problem. Existing depth-from-defocus methods require multiple input images or special hardware customization. Recent deep monocular depth estimation is likewise limited to images with sufficient contextual information. In this work, we propose a novel method that realizes single-shot deep depth measurement based on a physical depth cue, using only an off-the-shelf camera and lens. When a camera takes a defocused image, the image contains various types of aberrations that depend on the distance from the image sensor and the position in the image plane. We call these minute and complexly compounded aberrations the Aberration Map (A-Map), and we found that the A-Map can be utilized as a reliable physical depth cue. We also propose a deep network, the A-Map Analysis Network (AMA-Net), which can effectively learn and estimate depth from the A-Map. To evaluate the validity and robustness of our approach, we have conducted extensive experiments on both real outdoor scenes and simulated images. The qualitative results show the accuracy and applicability of the method in comparison with a state-of-the-art deep context-based method.
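
To make the physical cue concrete, the sketch below (our illustration, not from the paper) shows the classical thin-lens relation underlying depth-from-defocus: the diameter of the defocus blur circle is a deterministic function of object distance. The A-Map generalizes this single scalar cue to the full, position-dependent aberration pattern of a real lens. All numbers here (focal length, f-number, focus distance) are illustrative assumptions.

# Minimal sketch (assumption, not the paper's model): thin-lens defocus
# blur as a physical depth cue. 1/f = 1/d + 1/v gives the image distance
# v for an object at distance d; the blur circle at the sensor plane then
# follows by similar triangles on the defocus cone.

def blur_diameter(d, f=0.05, N=2.0, d_focus=2.0):
    """Diameter (m) of the defocus blur circle for an object at distance d (m).

    f: focal length (m), N: f-number, d_focus: focused object distance (m).
    All defaults are illustrative, not the paper's setup.
    """
    A = f / N                             # aperture diameter
    v = 1.0 / (1.0 / f - 1.0 / d)         # image distance for object at d
    s = 1.0 / (1.0 / f - 1.0 / d_focus)   # sensor plane (focused at d_focus)
    return A * abs(v - s) / v             # similar triangles on the blur cone

for d in (1.0, 2.0, 4.0, 8.0):
    print(f"object at {d} m -> blur {blur_diameter(d) * 1e6:.0f} um")

Running this prints zero blur at the focus distance (2 m) and growing blur on either side of it, which is exactly why defocus, and more generally aberration, encodes distance.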
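
The abstract does not detail AMA-Net's architecture, so the following is only a toy sketch of the input/output contract one might infer from it: a small PyTorch CNN that regresses depth from an image patch together with its normalized image-plane position, since the aberrations the A-Map captures depend on both distance and position. Every layer choice and name below is hypothetical.

# Toy illustration (assumption, not AMA-Net): regress depth from a patch
# plus its (x, y) sensor position, because the aberration pattern varies
# across the image plane.
import torch
import torch.nn as nn

class ToyAberrationDepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional features from a 32x32 RGB patch.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Head fuses patch features with the patch's image-plane position.
        self.head = nn.Sequential(
            nn.Linear(32 + 2, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, patch, xy):
        feat = self.features(patch).flatten(1)          # (B, 32)
        return self.head(torch.cat([feat, xy], dim=1))  # (B, 1) depth

net = ToyAberrationDepthNet()
patch = torch.randn(4, 3, 32, 32)   # four random patches
xy = torch.rand(4, 2) * 2 - 1       # positions normalized to [-1, 1]
print(net(patch, xy).shape)         # torch.Size([4, 1])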

Related Material


[pdf]
[bibtex]
@InProceedings{Kashiwagi_2019_ICCV,
author = {Kashiwagi, Masako and Mishima, Nao and Kozakaya, Tatsuo and Hiura, Shinsaku},
title = {Deep Depth From Aberration Map},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}