VILA: On Pre-training for Visual Language Models

Ji Lin, Hongxu Yin, Wei Ping, Pavlo Molchanov, Mohammad Shoeybi, Song Han; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 26689-26699

Abstract


Visual language models (VLMs) have progressed rapidly with the recent success of large language models. There have been growing efforts on visual instruction tuning to extend the LLM with visual inputs, but these efforts lack an in-depth study of the visual language pre-training process, where the model learns to perform joint modeling of both modalities. In this work, we examine the design options for VLM pre-training by augmenting an LLM towards a VLM through step-by-step controllable comparisons. We introduce three main findings: (1) freezing the LLM during pre-training can achieve decent zero-shot performance but lacks in-context learning capability, which requires unfreezing the LLM; (2) interleaved pre-training data is beneficial, whereas image-text pairs alone are not optimal; (3) re-blending text-only instruction data into image-text data during instruction fine-tuning not only remedies the degradation on text-only tasks but also boosts VLM task accuracy. With the enhanced pre-training recipe, we build VILA, a visual language model family that consistently outperforms state-of-the-art models, e.g., LLaVA-1.5, across main benchmarks without bells and whistles. Multi-modal pre-training also helps unveil appealing properties of VILA, including multi-image reasoning, enhanced in-context learning, and better world knowledge. VILA is also deployable on Jetson Orin for on-device VLM.
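The three findings correspond to concrete choices in the training recipe. The sketch below is a minimal, illustrative PyTorch rendering of those choices (unfreezing the LLM, preferring interleaved pre-training data, and blending text-only instruction data into the fine-tuning mix); the module names, placeholder datasets, and blend ratio are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the three recipe choices from the abstract.
# All modules and datasets are hypothetical placeholders.
import random
import torch.nn as nn


class TinyVLM(nn.Module):
    """Toy stand-in for a VLM: visual encoder -> projector -> LLM."""

    def __init__(self, vis_dim=32, llm_dim=64):
        super().__init__()
        self.visual_encoder = nn.Linear(vis_dim, vis_dim)  # placeholder for a vision backbone
        self.projector = nn.Linear(vis_dim, llm_dim)       # maps vision features into LLM space
        self.llm = nn.Linear(llm_dim, llm_dim)             # placeholder for the language model

    def forward(self, image_feats):
        return self.llm(self.projector(self.visual_encoder(image_feats)))


model = TinyVLM()

# Finding (1): a frozen LLM limits in-context learning, so the pre-training stage
# updates the LLM (and projector) rather than keeping the LLM frozen.
for p in model.parameters():
    p.requires_grad = True  # unfreeze everything, including the LLM

# Finding (2): prefer interleaved image-text documents over caption pairs alone
# for pre-training (both corpora here are stand-in strings).
interleaved_corpus = ["<img> ... text ... <img> ... text ..."] * 100
caption_pairs = ["<img> caption"] * 100
pretraining_data = interleaved_corpus  # interleaved data is the beneficial choice

# Finding (3): during instruction fine-tuning, re-blend text-only instruction data
# into the image-text mix to preserve text-only ability (ratio shown is illustrative).
image_text_sft = [{"image": True, "prompt": "...", "answer": "..."}] * 90
text_only_sft = [{"image": False, "prompt": "...", "answer": "..."}] * 10
sft_mixture = image_text_sft + text_only_sft
random.shuffle(sft_mixture)
```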

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Lin_2024_CVPR,
    author    = {Lin, Ji and Yin, Hongxu and Ping, Wei and Molchanov, Pavlo and Shoeybi, Mohammad and Han, Song},
    title     = {VILA: On Pre-training for Visual Language Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {26689-26699}
}