Low-Latency Monocular Depth Estimation Using Event Timing on Neuromorphic Hardware

Stefano Chiavazza, Svea Marie Meyer, Yulia Sandamirskaya; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2023, pp. 4071-4080

Abstract


Depth estimation is an important task in many robotic applications. It is necessary for understanding and navigating an environment and for interacting with objects. Different active and passive sensing solutions can be used for depth estimation, with different trade-offs in accuracy, range, latency, robustness to challenging lighting conditions, power efficiency, and price. Event-based dynamic vision sensors (DVS) are particularly well suited for situations in which low-latency, low-power vision is needed, e.g. on a fast mobile robot. In this work, we present an event-based depth estimation method with a single DVS, using a novel depth-from-motion algorithm targeting neuromorphic hardware. The system first computes the optical flow on the neuromorphic chip and then computes the depth by combining the optical flow with the camera velocity. The method assumes purely translational motion and successfully reconstructs the depth from the measured flow. It achieves low-latency depth estimation (<0.5 ms) while maintaining a small network size, allowing for better scalability. We tested the algorithm on Intel's neuromorphic research chip Loihi 2.
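The geometric relationship the abstract relies on can be sketched as follows. This is not the authors' event-timing implementation on Loihi 2, only a minimal NumPy illustration of recovering depth from optical flow and known camera velocity under pure translation, assuming a pinhole camera with focal length `f` in pixels and velocity expressed in the camera frame (the function name and all parameters are illustrative, not from the paper):

```python
import numpy as np

def depth_from_translational_flow(x, y, flow, v, f):
    """Estimate depth Z at one pixel from measured optical flow.

    Under pure camera translation v = (vx, vy, vz) in the camera frame,
    the pinhole model gives the flow at pixel (x, y) (coordinates
    relative to the principal point, in pixels) as

        (u, w) = (1/Z) * (x*vz - f*vx, y*vz - f*vy)

    so Z follows from a least-squares fit of the measured flow to this
    direction. Sign conventions vary between references; this sketch
    assumes v is the camera's own velocity.
    """
    vx, vy, vz = v
    # Flow direction predicted by the translational motion model (up to 1/Z).
    a = np.array([x * vz - f * vx, y * vz - f * vy], dtype=float)
    flow = np.asarray(flow, dtype=float)
    # Least-squares solution of flow = a / Z  =>  Z = (a . a) / (a . flow).
    denom = np.dot(a, flow)
    if abs(denom) < 1e-12:
        raise ValueError("flow carries no depth information at this pixel")
    return np.dot(a, a) / denom
```

For example, a point at depth 2 m seen at pixel (10, -20) with f = 100 px and camera velocity (0.5, 0.1, 1.0) m/s produces the flow (-20, -15) px/s, from which the function recovers Z = 2.0 m. Note that near the focus of expansion the predicted flow vanishes, so depth there is ill-conditioned regardless of the flow estimator used.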

Related Material


[bibtex]
@InProceedings{Chiavazza_2023_CVPR,
    author    = {Chiavazza, Stefano and Meyer, Svea Marie and Sandamirskaya, Yulia},
    title     = {Low-Latency Monocular Depth Estimation Using Event Timing on Neuromorphic Hardware},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2023},
    pages     = {4071-4080}
}