[pdf]
[bibtex]
@InProceedings{Safonov_2025_ICCV,
  author    = {Safonov, Nickolay and Rakhmanov, Mikhail and Vatolin, Dmitriy and Timofte, Radu and Wu, Chunyu and Wu, Kejing and Patro, Kishor and Rathour, Pankaj and Channappayya, Sumohana and Pardhi, Pravin and Kamble, Vipin and Bhurchandi, Kishor and Liu, Biao and Hu, Jin and Xu, Jinyang and Dayu, Yang and Yihua, Chen},
  title     = {AIM 2025 Challenge on Screen-Content Video Quality Assessment: Methods and Results},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
  month     = {October},
  year      = {2025},
  pages     = {5714-5722}
}
AIM 2025 Challenge on Screen-Content Video Quality Assessment: Methods and Results
Abstract
This paper presents an overview of the AIM 2025 Challenge on Screen-Content Video Quality Assessment. The challenge used a set of 150 source videos. To obtain distorted versions, the source videos were transmitted through video-conferencing applications, which introduced real-world distortions such as compression artifacts and frame drops. The distorted versions were then labeled by human assessors via crowdsourcing, with votes from over 8,000 assessors, to produce reference subjective scores. The participants' goal was to develop an algorithm that assesses the visual quality of the videos and achieves the highest correlation with these subjective scores. The challenge attracted more than 45 registered teams, 5 of which passed the final phase with source-code verification. The outcomes may provide insights into the state of the art in screen-content video quality assessment and highlight emerging trends and effective strategies in this evolving research area. All data, including the processed videos and the subjective comparison votes and scores, is made publicly available at https://github.com/msu-video-group/AIM25_SC_Quality_Assessment
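Since submissions are ranked by correlation with subjective scores, and VQA challenges commonly report Spearman's rank-order correlation (SROCC) for this purpose, the criterion can be sketched in pure Python as below. Note that the exact metric and any tie-handling details used by this challenge are assumptions here, not taken from the abstract.

```python
def rankdata(xs):
    """Assign ranks (1-based), averaging ranks over tied values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        # Extend j over the run of values tied with xs[order[i]].
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def srocc(predicted, subjective):
    """Spearman rank-order correlation: Pearson correlation of the ranks.
    Assumes at least two distinct values in each input."""
    ra, rb = rankdata(predicted), rankdata(subjective)
    n = len(ra)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sa = sum((x - ma) ** 2 for x in ra) ** 0.5
    sb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (sa * sb)

# A perfectly monotone predictor gets SROCC = 1.0 regardless of scale:
print(srocc([0.1, 0.5, 0.7, 0.9], [42, 58, 63, 80]))  # → 1.0
```

A model can thus score well even if its raw outputs are on a different scale than the mean opinion scores, as long as it orders the videos consistently with human judgments.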