Asynchronous Stereo Vision for Event-Driven Dynamic Stereo Sensor Using an Adaptive Cooperative Approach

Ewa Piatkowska, Ahmed Nabil Belbachir, Margrit Gelautz; Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 2013, pp. 45-50

Abstract


This paper presents an adaptive cooperative approach to 3D reconstruction tailored to a bio-inspired depth camera: the stereo dynamic vision sensor (DVS). The DVS consists of self-spiking pixels that asynchronously generate events upon relative changes in light intensity. These sensors offer simultaneously high temporal resolution (better than 10 µs) and a wide dynamic range (>120 dB) with a sparse data representation, which is not possible with frame-based cameras. To exploit the potential of the DVS and benefit from its features, depth calculation should take into account the spatiotemporal and asynchronous nature of the data provided by the sensor. This work develops a suitable asynchronous, event-driven stereo algorithm. We propose a modification of the cooperative network [6] in which a history of recent activity in the scene is stored to serve as spatiotemporal context for the disparity calculation of each incoming event. The network evolves continuously in time as events are generated. In our approach, not only is the spatiotemporal aspect of the data preserved, but the matching is also performed asynchronously. Experimental results show that the proposed approach is well suited to DVS data and can be successfully used in our efficient passive depth camera.

Related Material


[bibtex]
@InProceedings{Piatkowska_2013_ICCV_Workshops,
author = {Ewa Piatkowska and Ahmed Nabil Belbachir and Margrit Gelautz},
title = {Asynchronous Stereo Vision for Event-Driven Dynamic Stereo Sensor Using an Adaptive Cooperative Approach},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
month = {June},
pages = {45-50},
year = {2013}
}