A Retrospect to Multi-prompt Learning across Vision and Language

Ziliang Chen, Xin Huang, Quanlong Guan, Liang Lin, Weiqi Luo; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 22190-22201

Abstract


The vision community is undergoing unprecedented progress with the emergence of Vision-Language Pretraining Models (VLMs). Prompt learning serves as the holy grail for accessing VLMs, since it enables their fast adaptation to downstream tasks with limited resources. However, existing research revolves around single-prompt paradigms and rarely investigates the technical potential behind their multi-prompt learning counterparts. This paper aims to provide a principled retrospect of vision-language multi-prompt learning. We extend the recently observed constant modality gap phenomenon to learnable prompts and then justify, both empirically and theoretically, the superiority of vision-language transfer with multi-prompt augmentation. Based on this observation, we propose Energy-based Multi-prompt Learning (EMPL), which generates multiple prompt embeddings by drawing instances from an energy-based distribution implicitly defined by the VLM. Our EMPL is therefore not only parameter-efficient but also rigorously leads to a balance between in-domain and out-of-domain open-vocabulary generalization. Comprehensive experiments justify our claims and demonstrate the excellence of EMPL.
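To make the core idea concrete, the sketch below illustrates how multiple prompt embeddings might be drawn from an energy-based distribution implicitly defined by a VLM. It is a hypothetical, minimal illustration rather than the paper's actual implementation: a frozen toy text encoder and a fixed image feature stand in for the VLM, the energy is taken as negative image-text cosine similarity, and sampling uses stochastic gradient Langevin dynamics (the sampler choice is an assumption made here for illustration).

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

embed_dim, ctx_len, feat_dim = 32, 4, 64

# Frozen stand-ins for the VLM's text encoder and a pre-computed image feature.
# Both are hypothetical placeholders, not the paper's actual model.
text_encoder = torch.nn.Linear(ctx_len * embed_dim, feat_dim)
for p in text_encoder.parameters():
    p.requires_grad_(False)
image_feature = F.normalize(torch.randn(feat_dim), dim=0)

def energy(prompt_ctx):
    # Energy = negative cosine similarity between the encoded prompt and the image
    # feature, so low-energy prompts align well with the image in the shared space.
    text_feature = F.normalize(text_encoder(prompt_ctx.flatten(-2)), dim=-1)
    return -(text_feature * image_feature).sum(-1)

def sample_prompts(num_prompts=4, steps=50, step_size=0.1, noise_scale=0.01):
    # Draw several prompt context embeddings with stochastic gradient Langevin
    # dynamics: gradient descent on the energy plus Gaussian noise at each step.
    ctx = torch.randn(num_prompts, ctx_len, embed_dim, requires_grad=True)
    for _ in range(steps):
        grad, = torch.autograd.grad(energy(ctx).sum(), ctx)
        with torch.no_grad():
            ctx -= step_size * grad
            ctx += noise_scale * torch.randn_like(ctx)
    return ctx.detach()

prompts = sample_prompts()
print("sampled prompt contexts:", tuple(prompts.shape))  # (4, 4, 32)
print("energies:", energy(prompts))
```

In this toy setting, each sampled context plays the role of one learnable prompt, so a single sampling run yields a diverse set of prompts for multi-prompt augmentation instead of optimizing one prompt to a point estimate.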

Related Material


@InProceedings{Chen_2023_ICCV,
    author    = {Chen, Ziliang and Huang, Xin and Guan, Quanlong and Lin, Liang and Luo, Weiqi},
    title     = {A Retrospect to Multi-prompt Learning across Vision and Language},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {22190-22201}
}