LVLM-Interpret: An Interpretability Tool for Large Vision-Language Models

Gabriela Ben Melech Stan, Estelle Aflalo, Raanan Yehezkel Rohekar, Anahita Bhiwandiwalla, Shao-Yen Tseng, Matthew Lyle Olson, Yaniv Gurwicz, Chenfei Wu, Nan Duan, Vasudev Lal; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 8182-8187

Abstract


In the rapidly evolving landscape of artificial intelligence, multi-modal large language models are emerging as a significant area of interest. These models, which combine various forms of data input, are becoming increasingly popular. However, understanding their internal mechanisms remains a complex task. Numerous advancements have been made in the field of explainability tools and mechanisms, yet there is still much to explore. In this work, we present a novel interactive application aimed at understanding the internal mechanisms of large vision-language models. Our interface is designed to enhance the interpretability of the image patches that are instrumental in generating an answer, and to assess the efficacy of the language model in grounding its output in the image. With our application, a user can systematically investigate the model and uncover system limitations, paving the way for enhancements in system capabilities. Finally, we present a case study of how our application can aid in understanding failure mechanisms in a popular large multi-modal model: LLaVA.
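
The core idea the abstract describes is visualizing how strongly the tokens of a generated answer attend to individual image patches. As a rough, hedged illustration of that idea (not the tool's actual implementation), the sketch below aggregates a hypothetical attention tensor over layers and heads, selects the rows for the answer tokens and the columns for the image-patch tokens, and reshapes the result into a patch-grid heatmap. The tensor layout, patch count, and token positions are all assumptions made for the example.

import torch

def patch_relevancy_map(attn, answer_positions, image_start, num_patches, grid_size):
    """Aggregate attention from answer tokens to image-patch tokens.

    attn: assumed shape (layers, heads, seq_len, seq_len)
    answer_positions: indices of the generated answer tokens in the sequence
    image_start: index of the first image-patch token
    num_patches: number of image-patch tokens (e.g. 576 for a 24x24 grid)
    grid_size: side length of the square patch grid
    """
    # Average over layers and heads -> (seq_len, seq_len)
    avg = attn.mean(dim=(0, 1))
    # Attention from each answer token to each image patch, averaged over answer tokens
    patch_scores = avg[answer_positions, image_start:image_start + num_patches].mean(dim=0)
    # Normalize to [0, 1] so the map can be displayed as a heatmap overlay
    patch_scores = (patch_scores - patch_scores.min()) / (patch_scores.max() - patch_scores.min() + 1e-8)
    return patch_scores.reshape(grid_size, grid_size)

# Toy usage with random weights standing in for a real LVLM forward pass
layers, heads, seq_len = 4, 8, 700
attn = torch.rand(layers, heads, seq_len, seq_len).softmax(dim=-1)
heatmap = patch_relevancy_map(
    attn,
    answer_positions=torch.arange(650, 660),  # hypothetical answer-token span
    image_start=35,                           # hypothetical first image-token index
    num_patches=576,
    grid_size=24,
)
print(heatmap.shape)  # torch.Size([24, 24])

In practice the attention (or relevancy) values would come from a real model forward pass, and the resulting grid would be upsampled and overlaid on the input image; this sketch only shows the aggregation step.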

Related Material


[bibtex]
@InProceedings{Ben_Melech_Stan_2024_CVPR,
    author    = {Ben Melech Stan, Gabriela and Aflalo, Estelle and Rohekar, Raanan Yehezkel and Bhiwandiwalla, Anahita and Tseng, Shao-Yen and Olson, Matthew Lyle and Gurwicz, Yaniv and Wu, Chenfei and Duan, Nan and Lal, Vasudev},
    title     = {LVLM-Interpret: An Interpretability Tool for Large Vision-Language Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {8182-8187}
}