VinVL: Revisiting Visual Representations in Vision-Language Models

Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, Jianfeng Gao; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 5579-5588

Abstract


This paper presents a detailed study of improving visual features and develops an improved object detection model for vision-language (VL) tasks. Compared to the most widely used bottom-up and top-down model [2], the new model is bigger and pre-trained on much larger training corpora that combine multiple public annotated object detection datasets; it can therefore generate representations of a richer collection of visual objects and concepts. While previous VL research has focused on improving the vision-language fusion model and left the object detection model untouched, we present an empirical study showing that visual features matter significantly in VL models. In our experiments we feed the visual features generated by the new object detection model into a pre-trained transformer-based VL fusion model, Oscar+, and fine-tune Oscar+ on a wide range of downstream VL tasks. Our results show that the new visual features significantly improve performance across all VL tasks, creating new state-of-the-art results on seven public benchmarks. We will release the new object detection model to the public.
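The fusion setup the abstract describes, where region-level features from an object detector are fed together with text into a single transformer-based VL model, can be sketched roughly as follows. This is an illustrative toy model, not the authors' released Oscar+ code; all module names, dimensions (e.g. 2054-d region features), and the image-text matching head are assumptions for the sake of the example.

```python
# Minimal sketch (illustrative, not the released Oscar+ implementation) of
# fusing detector region features, detected object tags, and text tokens in
# one transformer encoder, in the spirit of Oscar/Oscar+-style VL models.
import torch
import torch.nn as nn

class ToyVLFusion(nn.Module):
    def __init__(self, vocab_size=30522, hidden=768, region_feat_dim=2054,
                 num_layers=4, num_heads=12):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, hidden)
        # Project detector outputs (assumed here: 2048-d region features
        # concatenated with 6-d box geometry) into the transformer width.
        self.region_proj = nn.Linear(region_feat_dim, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=num_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(hidden, 2)  # e.g. an image-text matching head

    def forward(self, token_ids, tag_ids, region_feats):
        # token_ids:    (B, Lt) word-piece ids of the caption/question
        # tag_ids:      (B, Lo) word-piece ids of detected object tags
        # region_feats: (B, Lr, region_feat_dim) detector region features
        text = self.word_emb(token_ids)
        tags = self.word_emb(tag_ids)
        regions = self.region_proj(region_feats)
        seq = torch.cat([text, tags, regions], dim=1)  # one fused sequence
        fused = self.encoder(seq)
        return self.head(fused[:, 0])                  # predict from first token

# Toy usage: batch of 2, 8 text tokens, 4 object tags, 10 detected regions.
model = ToyVLFusion()
logits = model(torch.randint(0, 30522, (2, 8)),
               torch.randint(0, 30522, (2, 4)),
               torch.randn(2, 10, 2054))
print(logits.shape)  # torch.Size([2, 2])
```

The point of the sketch is only to show where the detector's visual features enter the pipeline: a stronger detector changes `region_feats` (and the object tags), while the fusion transformer itself can stay fixed, which is what makes the paper's "vision features matter" comparison possible.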

Related Material


[bibtex]
@InProceedings{Zhang_2021_CVPR,
    author    = {Zhang, Pengchuan and Li, Xiujun and Hu, Xiaowei and Yang, Jianwei and Zhang, Lei and Wang, Lijuan and Choi, Yejin and Gao, Jianfeng},
    title     = {VinVL: Revisiting Visual Representations in Vision-Language Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {5579-5588}
}