@InProceedings{Park_2025_ICCV,
  author    = {Park, Incheol and Jin, Youngwan and Nalcakan, Yagiz and Ju, Hyeongjin and Yeo, Sanghyeop and Kim, Shiho},
  title     = {IVIFormer: Illumination-Aware Infrared-Visible Image Fusion via Adaptive Domain-Switching Cross Attention},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
  month     = {October},
  year      = {2025},
  pages     = {2220-2229}
}
IVIFormer: Illumination-Aware Infrared-Visible Image Fusion via Adaptive Domain-Switching Cross Attention
Abstract
A critical challenge in autonomous driving is maintaining robust perception across drastic illumination changes, from day to night. While infrared-visible image fusion (IVIF) offers a promising solution, most existing methods adopt a static, illumination-agnostic approach, failing to adapt to the changing importance of each sensor modality. In this paper, we introduce IVIFormer, a novel fusion network that explicitly tackles this limitation through a dynamic, condition-aware strategy. The core of our method is the Adaptive Domain-Switching Cross-Attention (ADS-CA), a mechanism that dynamically reverses the roles of query and key/value features between the infrared and visible domains. This switching is guided by a highly efficient Light Condition Decision Module (LCDM) that classifies the scene as day or night. This explicit, adaptive design allows IVIFormer to intelligently leverage the most informative sensor data for any given lighting condition. Extensive experiments on public long-wave infrared (LWIR) datasets and our newly collected short-wave infrared (SWIR) dataset demonstrate that IVIFormer achieves competitive performance. Both quantitative and qualitative results validate its superiority in preserving crucial details and enhancing scene visibility, especially under challenging lighting transitions. Code and pretrained models are available at: https://github.com/anon022/IVIFormer
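The abstract's core idea, reversing the query and key/value roles between the infrared and visible streams depending on a day/night decision, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: it assumes single-head scaled dot-product attention without learned projections, and it stands in for the LCDM with a toy mean-brightness threshold; all function names and the threshold value are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feat, kv_feat):
    # single-head scaled dot-product cross attention
    # (no learned Q/K/V projections, for illustration only)
    d = q_feat.shape[-1]
    attn = softmax(q_feat @ kv_feat.T / np.sqrt(d))
    return attn @ kv_feat

def light_condition(visible_img, threshold=0.35):
    # toy stand-in for the LCDM: classify the scene as day or night
    # from the mean intensity of the visible image (assumed heuristic)
    return "day" if visible_img.mean() > threshold else "night"

def domain_switching_fusion(vis_tokens, ir_tokens, visible_img):
    # swap which modality supplies queries vs. keys/values
    # based on the predicted lighting condition
    if light_condition(visible_img) == "day":
        # visible features query the infrared features
        return cross_attention(vis_tokens, ir_tokens)
    else:
        # infrared features query the visible features
        return cross_attention(ir_tokens, vis_tokens)
```

In this sketch the fused output lives in the key/value modality's feature space; the paper's actual ADS-CA presumably uses learned projections and a trained LCDM classifier rather than a brightness threshold, but the role-switching control flow is the point being illustrated.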