Photon-Flooded Single-Photon 3D Cameras

Anant Gupta, Atul Ingle, Andreas Velten, Mohit Gupta; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 6770-6779

Abstract
Single-photon avalanche diodes (SPADs) are starting to play a pivotal role in the development of photon-efficient, long-range LiDAR systems. However, due to non-linearities in their image formation model, a high photon flux (e.g., due to strong sunlight) leads to distortion of the incident temporal waveform and, potentially, large depth errors. Operating SPADs in low-flux regimes can mitigate these distortions, but often requires attenuating the signal and thus results in a low signal-to-noise ratio. In this paper, we address the following basic question: what is the optimal photon flux at which a SPAD-based LiDAR should be operated? We derive a closed-form expression for the optimal flux, which is quasi-depth-invariant and depends on the ambient light strength. The optimal flux is lower than what a SPAD typically measures in real-world scenarios, but, surprisingly, considerably higher than what is conventionally suggested for avoiding distortions. We propose a simple, adaptive approach for achieving the optimal flux by attenuating incident flux based on an estimate of ambient light strength. Using extensive simulations and a hardware prototype, we show that the optimal flux criterion holds for several depth estimators and under a wide range of illumination conditions.
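The distortion mentioned above arises because a synchronous SPAD records only the first detected photon per laser cycle, so early bins "shadow" later ones (pile-up). A minimal sketch of this standard first-photon detection model is below; it is not the paper's derivation, and all parameter values (number of bins, ambient and signal flux levels, the 10x attenuation factor) are illustrative assumptions chosen to make the effect visible.

```python
import numpy as np

def detection_histogram(flux):
    """Expected first-photon detection probability per time bin.

    flux[i] is the mean number of photons incident in bin i per laser
    cycle (Poisson rate). Under the standard pile-up model, a photon is
    detected in bin i only if bin i fires AND all earlier bins were empty:
        P(i) = (1 - exp(-flux[i])) * prod_{j<i} exp(-flux[j])
    """
    # Probability that every bin before i was photon-free (survival term).
    survival = np.exp(-np.cumsum(np.concatenate(([0.0], flux[:-1]))))
    return (1.0 - np.exp(-flux)) * survival

B = 100                       # number of histogram bins (illustrative)
true_bin = 60                 # bin containing the laser return
flux = np.full(B, 0.05)      # strong ambient flux in every bin
flux[true_bin] += 0.5        # laser signal on top of ambient

# High flux: early ambient photons shadow the true peak, so the
# histogram mode collapses toward bin 0 (pile-up distortion).
p_high = detection_histogram(flux)
print("peak at high flux:", np.argmax(p_high))       # distorted

# Attenuating the incident flux (here by 10x) restores the peak
# at the true depth bin, at the cost of fewer detections overall.
p_low = detection_histogram(0.1 * flux)
print("peak after attenuation:", np.argmax(p_low))   # recovers true_bin
```

The paper's contribution is characterizing where between these two extremes the optimal attenuation lies; this sketch only reproduces the qualitative trade-off that motivates that question.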

Related Material
[bibtex]
@InProceedings{Gupta_2019_CVPR,
author = {Gupta, Anant and Ingle, Atul and Velten, Andreas and Gupta, Mohit},
title = {Photon-Flooded Single-Photon 3D Cameras},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}