Active Prompt Learning in Vision Language Models

Jihwan Bang, Sumyeong Ahn, Jae-Gil Lee; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 27004-27014
Abstract
Pre-trained Vision Language Models (VLMs) have demonstrated notable progress in various zero-shot tasks, such as classification and retrieval. Despite their performance, adapting them remains essential because improving performance on new tasks requires task-specific knowledge. While labels are needed for this adaptation, acquiring them is typically expensive. To overcome this challenge, active learning, a method of achieving high performance by obtaining labels for a small number of samples from experts, has been studied. Active learning primarily focuses on selecting unlabeled samples for labeling and leveraging them to train models. In this study, we pose the question, "How can pre-trained VLMs be adapted under the active learning framework?" In response to this inquiry, we observe that (1) simply applying a conventional active learning framework to pre-trained VLMs may even degrade performance compared to random selection because of the class imbalance among labeling candidates, and (2) the knowledge of VLMs can provide hints for achieving this balance before labeling. Based on these observations, we devise a novel active learning framework for VLMs, denoted as PCB. To assess the effectiveness of our approach, we conduct experiments on seven real-world datasets, and the results demonstrate that PCB surpasses both conventional active learning and random sampling methods.
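The abstract states the core idea only at a high level: a VLM's zero-shot predictions can serve as pseudo-labels to rebalance the pool of labeling candidates before querying an expert. As a rough illustration only (not the authors' actual PCB algorithm, whose details are in the paper), the following minimal Python sketch shows one way such pseudo-label balancing could be combined with a standard uncertainty-based acquisition score; every function name and design choice here is a hypothetical assumption.

import numpy as np

# Hypothetical sketch, not the paper's PCB implementation: rebalance an
# active-learning query batch using the VLM's zero-shot class probabilities
# as pseudo-labels before asking an expert for ground-truth labels.
def pseudo_balanced_query(probs, uncertainty, batch_size):
    # probs:        (N, C) zero-shot class probabilities for the unlabeled
    #               pool, e.g. a softmax over CLIP image-text similarities.
    # uncertainty:  (N,) scores from any base strategy (entropy, margin, ...);
    #               higher means more informative.
    # batch_size:   number of samples to send to the expert for labeling.
    num_samples, num_classes = probs.shape
    pseudo_labels = probs.argmax(axis=1)   # VLM's best guess per sample
    quota = batch_size // num_classes      # target picks per pseudo-class

    selected = []
    for c in range(num_classes):
        candidates = np.flatnonzero(pseudo_labels == c)
        # Within each pseudo-class, keep the most uncertain samples.
        ranked = candidates[np.argsort(-uncertainty[candidates])]
        selected.extend(int(i) for i in ranked[:quota])

    # Fill leftover slots (rounding, empty pseudo-classes) by global
    # uncertainty, skipping anything already chosen.
    taken = np.zeros(num_samples, dtype=bool)
    taken[selected] = True
    for i in np.argsort(-uncertainty):
        if len(selected) == batch_size:
            break
        if not taken[i]:
            selected.append(int(i))
            taken[i] = True
    return np.asarray(selected)

In practice such a routine would run after computing zero-shot probabilities over the unlabeled pool; consult the paper itself for the actual PCB procedure and its evaluation.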
Related Material

[pdf]
[supp]
[arXiv]
[bibtex]

@InProceedings{Bang_2024_CVPR,
    author    = {Bang, Jihwan and Ahn, Sumyeong and Lee, Jae-Gil},
    title     = {Active Prompt Learning in Vision Language Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {27004-27014}
}