@BENCH: Benchmarking Vision-Language Models for Human-Centered Assistive Technology

Xin Jiang, Junwei Zheng, Ruiping Liu, Jiahang Li, Jiaming Zhang, Sven Matthiesen, Rainer Stiefelhagen; Proceedings of the Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 3934-3943

Abstract


As Vision-Language Models (VLMs) advance, human-centered Assistive Technologies (ATs) for helping People with Visual Impairments (PVIs) are evolving into generalists capable of performing multiple tasks simultaneously. However, benchmarking VLMs for ATs remains under-explored. To bridge this gap, we first create a novel AT benchmark (@BENCH). Guided by a pre-design user study with PVIs, our benchmark includes the five most crucial vision-language tasks: Panoptic Segmentation, Depth Estimation, Optical Character Recognition (OCR), Image Captioning, and Visual Question Answering (VQA). In addition, we propose a novel AT model (@MODEL) that addresses all tasks simultaneously and can be expanded to more assistive functions for helping PVIs. By integrating multi-modal information, our framework exhibits outstanding performance across tasks and offers PVIs more comprehensive assistance. Extensive experiments prove the effectiveness and generalizability of our framework.
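To make the benchmark's multi-task structure concrete, below is a minimal sketch of how an evaluation harness over the five tasks named in the abstract might be organized. Only the task list comes from the paper; the TaskResult type, the evaluate function, and the model stub are hypothetical illustrations, not the authors' actual API.

from dataclasses import dataclass
from typing import Callable, Dict

# The five tasks identified by the pre-design user study (from the abstract).
TASKS = [
    "panoptic_segmentation",
    "depth_estimation",
    "ocr",
    "image_captioning",
    "vqa",
]

@dataclass
class TaskResult:
    task: str
    score: float  # task-specific metric (e.g., PQ, RMSE, CIDEr, accuracy)

def evaluate(model: Callable[[str], float]) -> Dict[str, TaskResult]:
    """Run one model over all five tasks and collect per-task scores.

    Hypothetical: a real harness would load each task's dataset and
    compute its own metric rather than call a single scoring function.
    """
    return {task: TaskResult(task, model(task)) for task in TASKS}

if __name__ == "__main__":
    dummy_model = lambda task: 0.0  # stand-in for a real VLM
    for result in evaluate(dummy_model).values():
        print(f"{result.task}: {result.score:.2f}")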

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Jiang_2025_WACV,
    author    = {Jiang, Xin and Zheng, Junwei and Liu, Ruiping and Li, Jiahang and Zhang, Jiaming and Matthiesen, Sven and Stiefelhagen, Rainer},
    title     = {@BENCH: Benchmarking Vision-Language Models for Human-Centered Assistive Technology},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {3934-3943}
}