@InProceedings{Conde_2024_CVPR,
  author    = {Conde, Marcos V. and Zadtootaghaj, Saman and Barman, Nabajeet and Timofte, Radu and He, Chenlong and Zheng, Qi and Zhu, Ruoxi and Tu, Zhengzhong and Wang, Haiqiang and Chen, Xiangguang and Meng, Wenhui and Pan, Xiang and Shi, Huiying and Zhu, Han and Xu, Xiaozhong and Sun, Lei and Chen, Zhenzhong and Liu, Shan and Zhang, Zicheng and Wu, Haoning and Zhou, Yingjie and Li, Chunyi and Liu, Xiaohong and Lin, Weisi and Zhai, Guangtao and Sun, Wei and Cao, Yuqin and Jiang, Yanwei and Jia, Jun and Zhang, Zhichao and Chen, Zijian and Zhang, Weixia and Min, Xiongkuo and Goring, Steve and Qi, Zihao and Feng, Chen},
  title     = {AIS 2024 Challenge on Video Quality Assessment of User-Generated Content: Methods and Results},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2024},
  pages     = {5826-5837}
}
AIS 2024 Challenge on Video Quality Assessment of User-Generated Content: Methods and Results
Abstract
This paper reviews the AIS 2024 Video Quality Assessment (VQA) Challenge focused on User-Generated Content (UGC). The aim of this challenge is to gather deep learning-based methods capable of estimating the perceptual quality of UGC videos. The user-generated videos from the YouTube UGC Dataset span diverse content (sports, games, lyrics, anime, etc.), qualities, and resolutions. The proposed methods must process 30 FHD frames in under 1 second. A total of 102 participants registered for the challenge, and 15 submitted results during the challenge period. The performance of the top-5 submissions is reviewed and provided here as a survey of diverse deep models for Video Quality Assessment of user-generated content.
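The runtime constraint above (30 FHD frames scored in under 1 second) can be checked with a simple timing harness. The sketch below is illustrative only: `predict_quality` is a hypothetical stand-in for a challenge entry's deep model, not any participant's actual method, and the luma-averaging "score" is a placeholder.

```python
import time
import numpy as np

def predict_quality(frames: np.ndarray) -> float:
    """Placeholder quality predictor: mean luma in [0, 1].

    A real challenge entry would replace this body with its deep
    model's forward pass over the clip.
    """
    # frames: (N, 1080, 1920, 3) uint8 clip at FHD resolution
    weights = np.array([0.299, 0.587, 0.114], dtype=np.float32)  # BT.601 luma
    luma = frames.astype(np.float32) @ weights
    return float(luma.mean() / 255.0)

def meets_budget(frames: np.ndarray, budget_s: float = 1.0) -> bool:
    """Time one scoring pass against the challenge's 1-second budget."""
    start = time.perf_counter()
    predict_quality(frames)
    return (time.perf_counter() - start) < budget_s

# 30 black FHD frames, matching the challenge's input size.
clip = np.zeros((30, 1080, 1920, 3), dtype=np.uint8)
print(f"score={predict_quality(clip):.3f}, within budget={meets_budget(clip)}")
```

Whether a given model clears the budget naturally depends on hardware; the harness only demonstrates the shape of the check.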