SFOD: Spiking Fusion Object Detector
Yimeng Fan, Wei Zhang, Changsong Liu, Mingyang Li, Wenrui Lu; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 17191-17200
Abstract
Event cameras, characterized by high temporal resolution, high dynamic range, low power consumption, and high pixel bandwidth, offer unique capabilities for object detection in specialized contexts. Despite these advantages, the inherent sparsity and asynchrony of event data pose challenges to existing object detection algorithms. Spiking Neural Networks (SNNs), inspired by the way the human brain codes and processes information, offer a potential solution to these difficulties. However, their performance in object detection with event cameras is limited in current implementations. In this paper, we propose the Spiking Fusion Object Detector (SFOD), a simple and efficient approach to SNN-based object detection. Specifically, we design a Spiking Fusion Module, achieving the first fusion of feature maps from different scales in SNNs applied to event cameras. Additionally, through analysis and experiments conducted while pretraining the backbone network on the NCAR dataset, we delve deeply into the impact of spiking decoding strategies and loss functions on model performance. We thereby establish state-of-the-art SNN-based classification results, achieving 93.7% accuracy on the NCAR dataset. Experimental results on the GEN1 detection dataset demonstrate that SFOD achieves a state-of-the-art mAP of 32.1%, outperforming existing SNN-based approaches. Our research not only underscores the potential of SNNs in object detection with event cameras but also advances the development of SNNs. Code is available at https://github.com/yimeng-fan/SFOD.
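To make the core idea of multi-scale spiking feature fusion concrete, the sketch below upsamples binary spike maps from three scales to a common resolution, concatenates them along the channel dimension, and re-spikes the result with a simple single-step leaky integrate-and-fire (LIF) neuron. This is a minimal illustration under our own assumptions, not the authors' implementation: the module and parameter names (SpikingFusionSketch, LIF, tau, v_threshold) are hypothetical, and the paper's actual fusion design may differ in neuron model, fusion operator, and scale handling.

```python
# Minimal sketch of multi-scale spiking feature fusion (hypothetical;
# not the SFOD implementation). Assumes binary spike maps at three
# scales, nearest-neighbour upsampling, channel concatenation, and a
# single-step LIF neuron after a 1x1 convolution.

import torch
import torch.nn as nn
import torch.nn.functional as F


class LIF(nn.Module):
    """Single-step LIF: integrate input from a zero potential, then spike."""

    def __init__(self, tau: float = 2.0, v_threshold: float = 1.0):
        super().__init__()
        self.tau = tau
        self.v_threshold = v_threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # One integration step; a real SNN would carry the membrane
        # potential across time steps and handle reset dynamics.
        v = x / self.tau
        return (v >= self.v_threshold).float()


class SpikingFusionSketch(nn.Module):
    """Fuse spike feature maps from several scales into one spike map."""

    def __init__(self, in_channels: list[int], out_channels: int):
        super().__init__()
        self.proj = nn.Conv2d(sum(in_channels), out_channels,
                              kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.lif = LIF()

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        # Upsample every scale to the spatial size of the largest map,
        # concatenate along channels, project, and re-spike.
        target = feats[0].shape[-2:]
        up = [F.interpolate(f, size=target, mode="nearest") for f in feats]
        return self.lif(self.bn(self.proj(torch.cat(up, dim=1))))


if __name__ == "__main__":
    fusion = SpikingFusionSketch(in_channels=[64, 128, 256], out_channels=128)
    feats = [torch.randint(0, 2, (1, c, s, s)).float()
             for c, s in [(64, 32), (128, 16), (256, 8)]]
    print(fusion(feats).shape)  # torch.Size([1, 128, 32, 32])
```

The output of such a fusion stage remains a binary spike tensor, so it can feed directly into subsequent spiking layers of a detection head.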
Related Material
[pdf] [supp] [arXiv] [bibtex]
@InProceedings{Fan_2024_CVPR,
    author    = {Fan, Yimeng and Zhang, Wei and Liu, Changsong and Li, Mingyang and Lu, Wenrui},
    title     = {SFOD: Spiking Fusion Object Detector},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {17191-17200}
}