Information Density Principle for MLLM Benchmarks

Chunyi Li, Xiaozhe Li, Zicheng Zhang, Yuan Tian, Ziheng Jia, Xiaohong Liu, Xiongkuo Min, Jia Wang, Haodong Duan, Kai Chen, Guangtao Zhai; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2025, pp. 4167-4177

Abstract

With the emergence of Multimodal Large Language Models (MLLMs), hundreds of benchmarks have been developed to ensure the reliability of MLLMs in downstream tasks. However, the evaluation mechanism itself may not be reliable. For developers of MLLMs, questions remain about which benchmark to use and whether the test results meet their requirements. We therefore propose a critical principle of Information Density, which examines **how much insight a benchmark can provide for the development of MLLMs.** We characterize it along four key dimensions: (1) Fallacy, (2) Difficulty, (3) Redundancy, (4) Diversity. Through a comprehensive analysis of more than 10,000 samples, we measure the information density of 19 MLLM benchmarks. Experiments show that the latest benchmarks provide more insight than earlier ones, but there is still room to improve their information density. We hope this principle will promote the development and application of future MLLM benchmarks.
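
The abstract names the four dimensions but does not define how each is measured or how they are combined into a single density score. As a purely hypothetical illustration, a minimal sketch of such a scoring scheme might look like the following; the estimators, function names, and equal-weight average are assumptions for exposition, not the paper's method:

```python
import math
import statistics

# Hypothetical per-dimension estimators; the paper's actual definitions
# and measurement protocol are not given in this abstract.

def fallacy(flawed_flags):
    """Fraction of samples flagged as erroneous (mislabeled, ambiguous, ...)."""
    return sum(flawed_flags) / len(flawed_flags)

def difficulty(accuracies):
    """1 minus mean model accuracy: unsaturated benchmarks score higher."""
    return 1 - statistics.mean(accuracies)

def redundancy(pairwise_similarities):
    """Mean pairwise sample similarity: more repetitive content scores higher."""
    return statistics.mean(pairwise_similarities)

def diversity(topic_counts):
    """Normalized entropy of the topic distribution: broader coverage scores higher."""
    total = sum(topic_counts)
    probs = [c / total for c in topic_counts if c > 0]
    if len(probs) <= 1:
        return 0.0
    entropy = -sum(p * math.log(p) for p in probs)
    return entropy / math.log(len(probs))

def information_density(fal, dif, red, div):
    """Toy aggregate in [0, 1]: an assumed equal-weight average that rewards
    difficulty and diversity and penalizes fallacy and redundancy."""
    return (dif + div + (1 - fal) + (1 - red)) / 4

# Example with made-up per-benchmark statistics:
score = information_density(
    fal=fallacy([0, 0, 1, 0]),           # 25% of samples flagged as flawed
    dif=difficulty([0.92, 0.88, 0.95]),  # near-saturated accuracies -> low difficulty
    red=redundancy([0.10, 0.20, 0.15]),
    div=diversity([5, 3, 2]),
)
print(f"toy information density: {score:.3f}")
```

The intuition tracks the abstract: a benchmark is more informative when its samples are correct (low fallacy), unsaturated (high difficulty), non-repetitive (low redundancy), and broad in coverage (high diversity).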

Related Material

[pdf] [arXiv]
[bibtex]
```bibtex
@InProceedings{Li_2025_ICCV,
    author    = {Li, Chunyi and Li, Xiaozhe and Zhang, Zicheng and Tian, Yuan and Jia, Ziheng and Liu, Xiaohong and Min, Xiongkuo and Wang, Jia and Duan, Haodong and Chen, Kai and Zhai, Guangtao},
    title     = {Information Density Principle for MLLM Benchmarks},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2025},
    pages     = {4167-4177}
}
```