Enhancing Object Detection in Adverse Weather Conditions through Entropy and Guided Multimodal Fusion

Zhenrong Zhang, Haoyan Gong, Yuzheng Feng, Zixuan Chu, Hongbin Liu; Proceedings of the Asian Conference on Computer Vision (ACCV), 2024, pp. 2922-2938

Abstract


Integrating diverse representations from complementary sensing modalities is essential for robust scene interpretation in autonomous driving. Deep learning architectures that fuse vision and range data have advanced 2D and 3D object detection in recent years. However, both modalities degrade in adverse weather or poor lighting, leading to reduced detection performance. Domain adaptation methods have been developed to bridge the gap between source and target domains, but they often fall short because of the inherent discrepancy between the two, which can manifest as differing data distributions and feature spaces. This paper introduces a comprehensive domain-adaptive object detection framework. Built on deep transfer learning, the framework is designed to generalize robustly from labeled clear-weather data to unlabeled adverse-weather conditions, enhancing the performance of deep learning-based object detection models. Central to our approach is the Patch Entropy Fusion Module (PEFM), which dynamically integrates sensor data, emphasizing critical information and suppressing background distractions. It is complemented by a novel Weighted Decision Module (WDM) that adjusts each sensor's contribution according to its efficacy under specific environmental conditions, thereby optimizing detection accuracy. Additionally, we integrate a domain align loss during transfer learning to ensure effective domain adaptation by regularizing feature-map discrepancies between clear- and adverse-weather datasets. We evaluate our model on diverse datasets, including ExDark (unimodal), Cityscapes (unimodal), and Dense (multimodal), where it ranked 1st on all datasets at the time of writing.
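The following is a minimal sketch, not the authors' released implementation, of the two ideas the abstract outlines: weighting patches of two aligned modality feature maps by their entropy before fusion, and penalizing feature-map discrepancies between clear- and adverse-weather batches. The function names, the patch size, the channel-wise entropy formulation, and the moment-matching form of the alignment loss are assumptions made purely for illustration.

```python
# Hedged sketch of entropy-weighted patch fusion and a feature-discrepancy
# ("domain align") loss. All design choices here are illustrative assumptions,
# not the paper's exact PEFM/WDM or loss definitions.
import torch
import torch.nn.functional as F


def patch_entropy(feat: torch.Tensor, patch: int = 8, eps: float = 1e-8) -> torch.Tensor:
    """Per-patch Shannon entropy of a feature map (B, C, H, W) -> (B, 1, H/p, W/p)."""
    # Treat channel activations inside each patch as an unnormalized distribution.
    pooled = F.avg_pool2d(feat.abs(), kernel_size=patch, stride=patch)      # (B, C, h, w)
    probs = pooled / (pooled.sum(dim=1, keepdim=True) + eps)                # normalize over channels
    return -(probs * (probs + eps).log()).sum(dim=1, keepdim=True)          # (B, 1, h, w)


def entropy_fusion(feat_rgb: torch.Tensor, feat_range: torch.Tensor,
                   patch: int = 8) -> torch.Tensor:
    """Fuse two spatially aligned feature maps, weighting each patch by relative entropy."""
    e_rgb = patch_entropy(feat_rgb, patch)
    e_rng = patch_entropy(feat_range, patch)
    weights = torch.softmax(torch.cat([e_rgb, e_rng], dim=1), dim=1)        # (B, 2, h, w)
    w_rgb = F.interpolate(weights[:, :1], size=feat_rgb.shape[-2:], mode="nearest")
    w_rng = F.interpolate(weights[:, 1:], size=feat_rgb.shape[-2:], mode="nearest")
    return w_rgb * feat_rgb + w_rng * feat_range


def domain_align_loss(feat_clear: torch.Tensor, feat_adverse: torch.Tensor) -> torch.Tensor:
    """Penalize the gap between clear- and adverse-weather feature statistics (assumed form)."""
    mu_c, mu_a = feat_clear.mean(dim=(0, 2, 3)), feat_adverse.mean(dim=(0, 2, 3))
    sd_c, sd_a = feat_clear.std(dim=(0, 2, 3)), feat_adverse.std(dim=(0, 2, 3))
    return F.mse_loss(mu_c, mu_a) + F.mse_loss(sd_c, sd_a)


if __name__ == "__main__":
    rgb = torch.randn(2, 64, 32, 32)      # camera-branch features
    rng = torch.randn(2, 64, 32, 32)      # range-branch features projected to the image grid
    fused = entropy_fusion(rgb, rng)
    loss = domain_align_loss(rgb, torch.randn(2, 64, 32, 32))
    print(fused.shape, loss.item())
```

In this sketch, the per-sensor weighting plays the role the abstract assigns to PEFM/WDM only at the feature level; how the paper balances full detection outputs per sensor is not reproduced here.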

Related Material


[pdf]
[bibtex]
@InProceedings{Zhang_2024_ACCV,
    author    = {Zhang, Zhenrong and Gong, Haoyan and Feng, Yuzheng and Chu, Zixuan and Liu, Hongbin},
    title     = {Enhancing Object Detection in Adverse Weather Conditions through Entropy and Guided Multimodal Fusion},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2024},
    pages     = {2922-2938}
}