@InProceedings{Berjawi_2025_ICCV,
  author    = {Berjawi, Jad and Dupas, Yoann and C\'erin, Christophe},
  title     = {Towards a Generalizable Fusion Architecture for Multimodal Object Detection},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
  month     = {October},
  year      = {2025},
  pages     = {2192-2200}
}
Towards a Generalizable Fusion Architecture for Multimodal Object Detection
Abstract
Multimodal object detection improves robustness in challenging conditions by leveraging complementary cues from multiple sensor modalities. We introduce Filtered Multi-Modal Cross Attention Fusion (FMCAF), a preprocessing architecture designed to enhance the fusion of RGB and infrared (IR) inputs. FMCAF combines a frequency-domain filtering block (Freq-Filter) to suppress redundant spectral features with a cross-attention-based fusion module (MCAF) to improve intermodal feature sharing. Unlike approaches tailored to specific datasets, FMCAF aims for generalizability, improving performance across different multimodal challenges without requiring dataset-specific tuning. On LLVIP (low-light pedestrian detection) and VEDAI (aerial vehicle detection), FMCAF outperforms traditional fusion (concatenation), achieving +13.9% mAP@50 on VEDAI and +1.1% on LLVIP. These results support the potential of FMCAF as a flexible foundation for robust multimodal fusion in future detection pipelines.
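The abstract names two mechanisms, a frequency-domain filtering block (Freq-Filter) and a cross-attention fusion module (MCAF), without giving their internals. A minimal illustrative sketch of what such components could look like, assuming single-head attention, FFT low-pass filtering, and flattened feature tokens; all function names, shapes, and the `keep_ratio` parameter are hypothetical, not from the paper:

```python
import numpy as np

def freq_filter(x, keep_ratio=0.5):
    """Hypothetical frequency-domain filter: keep only low-frequency
    components of a 2D feature map, suppressing redundant spectra.
    x: (H, W) array."""
    F = np.fft.fftshift(np.fft.fft2(x))
    H, W = x.shape
    h, w = int(H * keep_ratio / 2), int(W * keep_ratio / 2)
    mask = np.zeros_like(F)
    mask[H // 2 - h:H // 2 + h, W // 2 - w:W // 2 + w] = 1
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

def cross_attention(q_feats, kv_feats):
    """Single-head scaled dot-product cross-attention: tokens of one
    modality query the tokens of the other.
    q_feats: (N, d), kv_feats: (M, d)."""
    d = q_feats.shape[-1]
    scores = q_feats @ kv_feats.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ kv_feats

rng = np.random.default_rng(0)
rgb = rng.standard_normal((16, 64))  # flattened RGB feature tokens
ir = rng.standard_normal((16, 64))   # flattened IR feature tokens

# Bidirectional cross-modal exchange with residual connections,
# then channel concatenation (the "traditional fusion" baseline
# the abstract compares against is concatenation alone).
rgb_enriched = rgb + cross_attention(rgb, ir)
ir_enriched = ir + cross_attention(ir, rgb)
fused = np.concatenate([rgb_enriched, ir_enriched], axis=-1)
print(fused.shape)  # (16, 128)
```

This is only a sketch of the general techniques (FFT masking, cross-attention); the paper's actual blocks, layer ordering, and learned parameters may differ substantially.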