An Investigation on LLMs' Visual Understanding Ability using SVG for Image-Text Bridging

Mu Cai, Zeyi Huang, Yuheng Li, Utkarsh Ojha, Haohan Wang, Yong Jae Lee; Proceedings of the Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 5377-5386

Abstract


Large language models (LLMs) have made significant advances in natural language understanding. But given the rich semantic representations an LLM has learned, could it also understand images in some way? This work investigates that question. To enable an LLM to process images, we convert them into a representation given by Scalable Vector Graphics (SVG). To study what the LLM can do with this XML-based textual description of images, we test it on three broad computer vision tasks: (i) visual reasoning and question answering, (ii) image classification under distribution shift and few-shot learning, and (iii) generating new images using visual prompting. Even though we do not naturally associate LLMs with visual understanding capabilities, our results indicate that an LLM can often do a decent job on many of these tasks, potentially opening new avenues for research into LLMs' ability to understand image data. Our code, data, and models can be found at https://github.com/mu-cai/svg-llm.
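To make the image-text bridging concrete, the sketch below shows how an SVG string might be wrapped into a text prompt for an LLM classification query. This is an illustrative assumption, not the paper's actual pipeline: the SVG is assumed to come from an external image-tracing tool (e.g. potrace or vtracer), and the actual model call is omitted.

```python
def build_svg_prompt(svg_text: str, classes: list[str]) -> str:
    """Wrap SVG markup in a text prompt asking an LLM to classify the image.

    Hypothetical prompt template for illustration; the paper's exact
    prompting format may differ.
    """
    options = ", ".join(classes)
    return (
        "The following SVG code renders an image:\n"
        f"{svg_text}\n"
        f"Which one of these categories does the image belong to: {options}?\n"
        "Answer with the category name only."
    )


# A toy SVG (a filled circle) standing in for a traced real image.
svg = '<svg xmlns="http://www.w3.org/2000/svg"><circle cx="5" cy="5" r="4"/></svg>'
prompt = build_svg_prompt(svg, ["ball", "box", "star"])
print(prompt)
```

The resulting string would then be sent to an off-the-shelf LLM as an ordinary text completion request, which is what lets a text-only model operate on image content.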

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Cai_2025_WACV,
  author    = {Cai, Mu and Huang, Zeyi and Li, Yuheng and Ojha, Utkarsh and Wang, Haohan and Lee, Yong Jae},
  title     = {An Investigation on LLMs' Visual Understanding Ability using SVG for Image-Text Bridging},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {5377-5386}
}