Multimodal Representation Learning by Alternating Unimodal Adaptation
Abstract
Multimodal learning, which integrates data from diverse sensory modalities, plays a pivotal role in artificial intelligence. However, existing multimodal learning methods often struggle in scenarios where some modalities appear more dominant than others during training, resulting in suboptimal performance. To address this challenge, we propose MLA (Multimodal Learning with Alternating Unimodal Adaptation). MLA reframes the conventional joint multimodal learning process as an alternating unimodal learning process, thereby minimizing interference between modalities. At the same time, it captures cross-modal interactions through a shared head that is continuously optimized across the different modalities. This optimization is controlled by a gradient modification mechanism that prevents the shared head from losing previously acquired information. During inference, MLA uses a test-time, uncertainty-based model fusion mechanism to integrate multimodal information. Extensive experiments on five diverse datasets, covering scenarios with both complete and missing modalities, demonstrate the superiority of MLA over competing prior approaches. Our code is available at https://github.com/Cecile-hi/Multimodal-Learning-with-Alternating-Unimodal-Adaptation.
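The abstract describes the scheme only at a high level; the following is a minimal sketch of that idea in PyTorch, not the authors' released implementation. The per-modality encoders, the shared head, and the entropy-based confidence weight exp(-entropy) are illustrative assumptions, and the gradient modification step that protects the shared head is omitted for brevity.

# Minimal sketch (not the released code): alternating unimodal training with a
# shared classification head, plus entropy-weighted test-time fusion.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical per-modality encoders mapping inputs to a common feature size.
encoders = nn.ModuleDict({
    "audio": nn.Sequential(nn.Linear(128, 64), nn.ReLU()),
    "visual": nn.Sequential(nn.Linear(512, 64), nn.ReLU()),
})
shared_head = nn.Linear(64, 10)  # shared across modalities

def train_epoch(batches, lr=1e-3):
    """Alternate over modalities: each parameter update uses one modality only."""
    for modality, encoder in encoders.items():
        opt = torch.optim.SGD(
            list(encoder.parameters()) + list(shared_head.parameters()), lr=lr)
        for x, y in batches[modality]:
            logits = shared_head(encoder(x))
            loss = F.cross_entropy(logits, y)
            opt.zero_grad()
            loss.backward()
            opt.step()

@torch.no_grad()
def predict(inputs):
    """Fuse unimodal predictions, giving low-entropy (confident) ones more weight."""
    probs, weights = [], []
    for modality, x in inputs.items():
        p = F.softmax(shared_head(encoders[modality](x)), dim=-1)
        entropy = -(p * p.clamp_min(1e-8).log()).sum(dim=-1, keepdim=True)
        probs.append(p)
        weights.append(torch.exp(-entropy))  # one plausible uncertainty weight
    w = torch.stack(weights)                 # [num_modalities, batch, 1]
    w = w / w.sum(dim=0, keepdim=True)
    return (torch.stack(probs) * w).sum(dim=0)

In this sketch, alternating over modalities means the shared head is updated by one modality at a time, which is where the paper's gradient modification would intervene to avoid overwriting what earlier modalities contributed.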
Related Material
[pdf] [supp] [arXiv] [bibtex]
@InProceedings{Zhang_2024_CVPR,
    author    = {Zhang, Xiaohui and Yoon, Jaehong and Bansal, Mohit and Yao, Huaxiu},
    title     = {Multimodal Representation Learning by Alternating Unimodal Adaptation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {27456-27466}
}