MRFS: Mutually Reinforcing Image Fusion and Segmentation

Hao Zhang, Xuhui Zuo, Jie Jiang, Chunchao Guo, Jiayi Ma; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 26974-26983

Abstract


This paper proposes a coupled learning framework, called MRFS, to break the performance bottleneck of infrared-visible image fusion and segmentation. By leveraging the intrinsic consistency between vision and semantics, it emphasizes mutual reinforcement rather than treating these tasks as separate issues. First, we embed weakened information recovery and salient information integration into the image fusion task, employing a CNN-based interactive gated mixed attention (IGM-Att) module to extract high-quality visual features. This aims to satisfy human visual perception, producing fused images with rich textures, high contrast, and vivid colors. Second, a transformer-based progressive cycle attention (PC-Att) module is developed to enhance semantic segmentation. It establishes single-modal self-reinforcement and cross-modal mutual complementarity, enabling more accurate decisions in machine semantic perception. Then, the cascade of IGM-Att and PC-Att implicitly couples the image fusion and semantic segmentation tasks, bringing vision-related and semantics-related features into closer alignment. The two tasks therefore provide learning priors to each other, resulting in visually satisfying fused images and more accurate segmentation decisions. Extensive experiments on public datasets demonstrate the advantages of our method in terms of both visual satisfaction and decision accuracy. The code is publicly available at https://github.com/HaoZhang1018/MRFS.
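The coupling the abstract describes (a gated mixing of infrared and visible features for the vision branch, followed by cross-modal attention for the semantics branch) can be sketched at a toy level. Everything below is an illustrative assumption, not the paper's actual IGM-Att/PC-Att implementation: the function names, the single-gate mixing rule, and the one-step cross-attention are simplifications of the CNN and transformer modules described above.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_mixed_attention(feat_ir, feat_vis, w_gate):
    """Toy stand-in for IGM-Att: a learned sigmoid gate mixes infrared
    and visible features element-wise (the paper's CNN module is richer)."""
    gate_logits = np.concatenate([feat_ir, feat_vis], axis=-1) @ w_gate
    gate = 1.0 / (1.0 + np.exp(-gate_logits))  # in (0, 1)
    return gate * feat_ir + (1.0 - gate) * feat_vis

def cross_attention(query_feat, key_feat):
    """Toy stand-in for one PC-Att step: tokens of one modality attend
    to tokens of the other (cross-modal complementarity)."""
    scores = query_feat @ key_feat.T / np.sqrt(query_feat.shape[-1])
    return softmax(scores, axis=-1) @ key_feat

rng = np.random.default_rng(0)
n_tokens, dim = 16, 32
ir = rng.standard_normal((n_tokens, dim))    # infrared-branch tokens
vis = rng.standard_normal((n_tokens, dim))   # visible-branch tokens
w_gate = rng.standard_normal((2 * dim, dim)) * 0.1

fused = gated_mixed_attention(ir, vis, w_gate)  # vision-oriented features
sem = cross_attention(fused, vis)               # semantics-oriented features
print(fused.shape, sem.shape)                   # (16, 32) (16, 32)
```

In this sketch, the fused features feed the attention step, mirroring how the cascade of IGM-Att into PC-Att lets the fusion features serve as a prior for segmentation.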

Related Material


[bibtex]
@InProceedings{Zhang_2024_CVPR,
  author    = {Zhang, Hao and Zuo, Xuhui and Jiang, Jie and Guo, Chunchao and Ma, Jiayi},
  title     = {MRFS: Mutually Reinforcing Image Fusion and Segmentation},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
  pages     = {26974-26983}
}