Honeybee: Locality-enhanced Projector for Multimodal LLM

Junbum Cha, Wooyoung Kang, Jonghwan Mun, Byungseok Roh; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 13817-13827

Abstract


In Multimodal Large Language Models (MLLMs), a visual projector plays a crucial role in bridging pre-trained vision encoders with LLMs, enabling profound visual understanding while harnessing the LLMs' robust capabilities. Despite the importance of the visual projector, it has been relatively less explored. In this study, we first identify two essential projector properties: (i) flexibility in managing the number of visual tokens, crucial for MLLMs' overall efficiency, and (ii) preservation of local context from visual features, vital for spatial understanding. Based on these findings, we propose a novel projector design that is both flexible and locality-enhanced, effectively satisfying the two desirable properties. Additionally, we present comprehensive strategies to effectively utilize multiple and multifaceted instruction datasets. Through extensive experiments, we examine the impact of individual design choices. Finally, our proposed MLLM, Honeybee, remarkably outperforms previous state-of-the-art methods across various benchmarks, including MME, MMBench, SEED-Bench, and LLaVA-Bench, achieving significantly higher efficiency. Code and models are available at https://github.com/kakaobrain/honeybee.
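To make the two projector properties named above concrete, the following is a minimal PyTorch sketch of a convolution-plus-pooling abstractor: convolutions mix neighboring visual tokens (locality preservation), and adaptive pooling sets how many tokens are passed to the LLM (flexibility). The class name, dimensions, and layer choices are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
import torch
import torch.nn as nn

class ConvAbstractor(nn.Module):
    """Illustrative locality-preserving projector (not the official Honeybee code).

    Convolutions keep neighboring visual tokens interacting, while adaptive
    average pooling controls how many visual tokens reach the LLM.
    """

    def __init__(self, vis_dim=1024, llm_dim=4096, num_queries=144, hidden_dim=1024):
        super().__init__()
        self.out_hw = int(num_queries ** 0.5)  # e.g. 144 queries -> 12x12 token grid
        self.conv = nn.Sequential(
            nn.Conv2d(vis_dim, hidden_dim, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv2d(hidden_dim, hidden_dim, kernel_size=3, padding=1),
            nn.GELU(),
        )
        self.pool = nn.AdaptiveAvgPool2d(self.out_hw)  # flexible visual-token count
        self.proj = nn.Linear(hidden_dim, llm_dim)     # map into the LLM embedding space

    def forward(self, vis_tokens):
        # vis_tokens: (B, N, vis_dim), where N = H*W patch tokens from the vision encoder
        b, n, c = vis_tokens.shape
        hw = int(n ** 0.5)
        x = vis_tokens.transpose(1, 2).reshape(b, c, hw, hw)  # restore 2D patch layout
        x = self.pool(self.conv(x))                           # local mixing, then downsample
        x = x.flatten(2).transpose(1, 2)                      # (B, num_queries, hidden_dim)
        return self.proj(x)

# Usage sketch: 576 ViT patch tokens (24x24) abstracted to 144 tokens for the LLM.
projector = ConvAbstractor()
out = projector(torch.randn(2, 576, 1024))
print(out.shape)  # torch.Size([2, 144, 4096])
```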

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Cha_2024_CVPR,
    author    = {Cha, Junbum and Kang, Wooyoung and Mun, Jonghwan and Roh, Byungseok},
    title     = {Honeybee: Locality-enhanced Projector for Multimodal LLM},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {13817-13827}
}