Live Demonstration: ANN vs SNN vs Hybrid Architectures for Event-Based Real-Time Gesture Recognition and Optical Flow Estimation

Adarsh Kumar Kosta, Marco Paul E. Apolinario, Kaushik Roy; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2023, pp. 4148-4149

Abstract


Spiking Neural Networks (SNNs) have recently emerged as a promising solution for handling asynchronous data from event-based cameras. Their inherent recurrence allows them to capture the temporal information in events effectively, unlike the widely used non-spiking artificial neural networks (ANNs). However, SNNs are not well suited to running on GPUs and still require specialized neuromorphic hardware to process events efficiently. Hybrid SNN-ANN architectures aim to obtain the best of both worlds: initial SNN layers capture the input's temporal information, followed by standard ANN layers for ease of training and deployment on GPUs. In this work, we implement ANN, SNN, and hybrid architectures for real-time gesture recognition and optical flow estimation on standard GPUs. We compare the architectures in terms of prediction accuracy, number of parameters, latency, and computational power when executing them in real time on a standard laptop. Our implementation suggests that the hybrid architecture offers the best trade-off among accuracy, compute efficiency, and latency on a readily available GPU platform.
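The hybrid pipeline described above can be sketched in a few lines: a leaky integrate-and-fire (LIF) spiking layer integrates event inputs over time, and its accumulated spike counts feed a standard non-spiking dense layer. This is only an illustrative toy, not the authors' model; the layer sizes, LIF constants (`decay`, `threshold`), and random weights are assumptions.

```python
import numpy as np

def lif_layer(event_frames, weights, decay=0.8, threshold=1.0):
    """Leaky integrate-and-fire layer: integrates weighted events over time
    and emits a binary spike whenever the membrane potential crosses threshold."""
    T, _ = event_frames.shape
    n_out = weights.shape[1]
    v = np.zeros(n_out)                 # membrane potential
    spikes = np.zeros((T, n_out))
    for t in range(T):
        v = decay * v + event_frames[t] @ weights   # leaky integration
        fired = v >= threshold
        spikes[t] = fired.astype(float)
        v[fired] = 0.0                  # hard reset after a spike
    return spikes

def ann_head(spike_counts, weights):
    """Standard (non-spiking) dense layer with ReLU, applied to spike counts."""
    return np.maximum(spike_counts @ weights, 0.0)

rng = np.random.default_rng(0)
events = (rng.random((16, 32)) < 0.2).astype(float)  # T=16 steps, 32 event channels
w_snn = rng.normal(scale=0.5, size=(32, 64))         # SNN front-end weights
w_ann = rng.normal(scale=0.5, size=(64, 10))         # ANN head (e.g. 10 gesture classes)

spikes = lif_layer(events, w_snn)          # temporal feature extraction (SNN)
logits = ann_head(spikes.sum(axis=0), w_ann)  # aggregate over time, then ANN
print(logits.shape)                        # one score per class: (10,)
```

Because only the front-end loops over time steps, the ANN head trains with ordinary backpropagation and runs efficiently on a GPU, which is the trade-off the hybrid design exploits.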

Related Material


[bibtex]
@InProceedings{Kosta_2023_CVPR,
  author    = {Kosta, Adarsh Kumar and Apolinario, Marco Paul E. and Roy, Kaushik},
  title     = {Live Demonstration: ANN vs SNN vs Hybrid Architectures for Event-Based Real-Time Gesture Recognition and Optical Flow Estimation},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2023},
  pages     = {4148-4149}
}