Improved Baselines with Visual Instruction Tuning

Haotian Liu, Chunyuan Li, Yuheng Li, Yong Jae Lee; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 26296-26306

Abstract


Large multimodal models (LMM) have recently shown encouraging progress with visual instruction tuning. In this paper, we present the first systematic study to investigate the design choices of LMMs in a controlled setting under the LLaVA framework. We show that the fully-connected vision-language connector in LLaVA is surprisingly powerful and data-efficient. With simple modifications to LLaVA, namely using CLIP-ViT-L-336px with an MLP projection and adding academic-task-oriented VQA data with response formatting prompts, we establish stronger baselines that achieve state-of-the-art across 11 benchmarks. Our final 13B checkpoint uses merely 1.2M publicly available data samples and finishes full training in about 1 day on a single 8-A100 node. Furthermore, we present some early exploration of open problems in LMMs, including scaling to higher-resolution inputs, compositional capabilities, and model hallucination. We hope this makes state-of-the-art LMM research more accessible. Code and model will be publicly available.
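To make the connector modification concrete, below is a minimal PyTorch sketch (not the authors' released code) of a two-layer MLP vision-language projection of the kind the abstract describes: it maps patch features from the vision encoder into the language model's embedding space. The class name MLPProjector and the dimensions (1024 for CLIP-ViT-L-336px features, 5120 for a 13B language model, 576 patch tokens from a 336px/patch-14 grid) are illustrative assumptions, not values taken from the paper's code.

import torch
import torch.nn as nn

class MLPProjector(nn.Module):
    """Sketch of an MLP vision-language connector (assumed dimensions)."""

    def __init__(self, vision_dim: int = 1024, llm_dim: int = 5120):
        super().__init__()
        # Two-layer MLP with GELU in place of a single linear projection.
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, visual_features: torch.Tensor) -> torch.Tensor:
        # visual_features: (batch, num_patches, vision_dim) from the vision encoder.
        # Returns (batch, num_patches, llm_dim) visual tokens for the language model.
        return self.proj(visual_features)

# Usage with dummy features standing in for CLIP-ViT-L-336px output
# (336 / 14 = 24, so 24 * 24 = 576 patch tokens at hidden size 1024).
projector = MLPProjector()
dummy = torch.randn(2, 576, 1024)
print(projector(dummy).shape)  # torch.Size([2, 576, 5120])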

Related Material


BibTeX
@InProceedings{Liu_2024_CVPR,
    author    = {Liu, Haotian and Li, Chunyuan and Li, Yuheng and Lee, Yong Jae},
    title     = {Improved Baselines with Visual Instruction Tuning},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {26296-26306}
}