Image Reconstruction From Neuromorphic Event Cameras Using Laplacian-Prediction and Poisson Integration With Spiking and Artificial Neural Networks
Event cameras are robust neuromorphic visual sensors that report transient changes in luminance as asynchronous events. Current paradigms for image reconstruction from events rely mostly on direct optimization of artificial Convolutional Neural Networks (CNNs). Here we propose a two-phase neural network comprising a CNN optimized for Laplacian prediction and a Spiking Neural Network (SNN) optimized for Poisson integration. By introducing Laplacian prediction into the pipeline, we achieve image reconstruction with a network of only 200 parameters. We further converted the CNN to an SNN, yielding a fully neuromorphic implementation, and optimized the network with the Mish activation and a novel convoluted CNN design, proposing a hybrid of spiking and artificial neural networks with < 100 parameters. Models were evaluated on both the N-MNIST and N-Caltech101 datasets.
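To illustrate the Poisson-integration phase, the following is a minimal sketch of our own (not the paper's implementation): given a predicted Laplacian field L satisfying the Poisson equation ∇²I = L, the image I is recoverable up to its mean, for example by division in the Fourier domain under a periodic-boundary assumption.

```python
import numpy as np

def poisson_integrate(lap):
    """Recover an image (up to its mean) from its periodic discrete
    Laplacian by dividing in the Fourier domain."""
    h, w = lap.shape
    ky = 2 * np.pi * np.fft.fftfreq(h)
    kx = 2 * np.pi * np.fft.fftfreq(w)
    # DFT eigenvalues of the 5-point periodic Laplacian stencil
    denom = 2 * np.cos(ky)[:, None] + 2 * np.cos(kx)[None, :] - 4
    F = np.fft.fft2(lap)
    denom[0, 0] = 1.0          # avoid divide-by-zero at DC
    F = F / denom
    F[0, 0] = 0.0              # the mean is unrecoverable; fix it to zero
    return np.real(np.fft.ifft2(F))

# Demo: take the periodic Laplacian of a toy image, then invert it
rng = np.random.default_rng(0)
img = rng.random((32, 32))
lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
       + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
rec = poisson_integrate(lap)
# Reconstruction matches the original up to a constant offset
err = np.abs((rec - rec.mean()) - (img - img.mean())).max()
```

In the paper's pipeline the integration is instead carried out by an SNN; the FFT inverse above only demonstrates that the Laplacian retains the image content up to its mean.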