Querying as Prompt: Parameter-Efficient Learning for Multimodal Language Model

Tian Liang, Jing Huang, Ming Kong, Luyuan Chen, Qiang Zhu; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 26855-26865

Abstract


Recent advancements in language models pre-trained on large-scale corpora have significantly propelled developments in the NLP domain and advanced progress in multimodal tasks. In this paper, we propose a Parameter-Efficient multimodal language model learning strategy named QaP (Querying as Prompt). Its core innovation is a novel modality-bridging method that allows a set of modality-specific queries to be input as soft prompts into a frozen pre-trained language model. Specifically, we introduce an efficient Text-Conditioned Resampler that is easy to incorporate into language models and enables adaptive injection of text-related multimodal information at different levels of the model through query learning. This approach effectively bridges multimodal information to the language model while fully leveraging its token fusion and representation potential. We validated our method on four datasets across three distinct multimodal tasks. The results demonstrate that our QaP multimodal language model achieves state-of-the-art performance on various tasks while training only 4.6% of the parameters.
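The abstract gives only a high-level description of the architecture. As a rough illustration, below is a minimal PyTorch sketch of a Perceiver-style text-conditioned resampler whose outputs could serve as soft prompts for a frozen language model. All module names, dimensions, and the conditioning scheme (learnable queries cross-attending over concatenated text and visual tokens) are assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch: learnable queries resample visual features,
# conditioned on text, into soft prompts for a frozen language model.
# Names, dimensions, and conditioning scheme are illustrative assumptions.
import torch
import torch.nn as nn


class TextConditionedResampler(nn.Module):
    def __init__(self, dim: int = 768, num_queries: int = 32, num_heads: int = 8):
        super().__init__()
        # Learnable modality-specific queries.
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        # Cross-attention: queries attend over [text; visual] tokens, so the
        # resampled prompts depend on the text input.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, visual_feats: torch.Tensor, text_feats: torch.Tensor) -> torch.Tensor:
        # visual_feats: (B, Nv, dim); text_feats: (B, Nt, dim)
        batch = visual_feats.size(0)
        q = self.queries.unsqueeze(0).expand(batch, -1, -1)
        kv = torch.cat([text_feats, visual_feats], dim=1)
        attn_out, _ = self.cross_attn(self.norm_q(q), self.norm_kv(kv), self.norm_kv(kv))
        q = q + attn_out
        # Output: (B, num_queries, dim) soft prompts to prepend to the
        # frozen LM's input (or intermediate-layer) embeddings.
        return q + self.ffn(q)


if __name__ == "__main__":
    resampler = TextConditionedResampler()
    visual = torch.randn(2, 196, 768)  # e.g. ViT patch features
    text = torch.randn(2, 16, 768)     # frozen LM token embeddings
    prompts = resampler(visual, text)
    print(prompts.shape)               # torch.Size([2, 32, 768])
```

In a setup like this, only the resampler and its query embeddings would be trainable while the language model stays frozen, which is qualitatively consistent with the small trainable-parameter fraction (4.6%) reported in the abstract.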

Related Material


[bibtex]
@InProceedings{Liang_2024_CVPR,
    author    = {Liang, Tian and Huang, Jing and Kong, Ming and Chen, Luyuan and Zhu, Qiang},
    title     = {Querying as Prompt: Parameter-Efficient Learning for Multimodal Language Model},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {26855-26865}
}