@InProceedings{Safonov_2025_CVPR,
  author    = {Safonov, Nickolay and Bryntsev, Alexey and Moskalenko, Andrey and Kulikov, Dmitry and Vatolin, Dmitriy and Timofte, Radu and Lei, Haibo and Gao, Qifan and Luo, Qing and Li, Yaqing and Song, Jie and Hao, Shaozhe and Zheng, Meisong and Xu, Jingyi and Wu, Chengbin and Liu, Jiahui and Chen, Ying and Deng, Xin and Xu, Mai and Liang, Peipei and Ma, Jie and Jin, Junjie and Pang, Yingxue and Luo, Fangzhou and Chen, Kai and Zhao, Shijie and Wu, Mingyang and Li, Renjie and Zuo, Yushen and Tu, Zhengzhong and Zhong, Shengyun},
  title     = {NTIRE 2025 Challenge on UGC Video Enhancement: Methods and Results},
  booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR) Workshops},
  month     = {June},
  year      = {2025},
  pages     = {1503-1513}
}
NTIRE 2025 Challenge on UGC Video Enhancement: Methods and Results
Abstract
This paper presents an overview of the NTIRE 2025 Challenge on UGC Video Enhancement. The challenge constructed a set of 150 user-generated content videos without reference ground truth, which suffer from real-world degradations such as noise, blur, faded colors, and compression artifacts. The participants' goal was to develop an algorithm capable of improving the visual quality of such videos. Given the widespread use of UGC on short-form video platforms, this task holds substantial practical importance. Evaluation was based on crowdsourced subjective quality assessment, collecting votes from over 8000 assessors. The challenge attracted more than 25 teams submitting solutions, 7 of which passed the final phase with source-code verification. The outcomes may provide insights into the state of the art in UGC video enhancement and highlight emerging trends and effective strategies in this evolving research area. All data, including the processed videos and subjective comparison votes and scores, is made publicly available -- https://github.com/msu-video-group/NTIRE25_UGC_Video_Enhancement.
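The released data includes raw pairwise subjective votes alongside final scores. A standard way to aggregate such pairwise preference votes into per-method quality scores is Bradley-Terry maximum-likelihood fitting; the sketch below is a generic illustration of that technique, not the challenge's actual evaluation code (the function name and iteration scheme are assumptions).

```python
import numpy as np

def bradley_terry(wins, n_iter=200, tol=1e-9):
    """Estimate Bradley-Terry scores from a pairwise win-count matrix.

    wins[i, j] = number of assessor votes preferring method i over method j.
    Returns scores normalized to sum to 1 (higher = better perceived quality).
    """
    n = wins.shape[0]
    p = np.ones(n)  # initial uniform scores
    for _ in range(n_iter):
        p_new = np.empty(n)
        for i in range(n):
            num = wins[i].sum()  # total wins of method i
            den = 0.0
            for j in range(n):
                if j == i:
                    continue
                n_ij = wins[i, j] + wins[j, i]  # comparisons between i and j
                if n_ij:
                    den += n_ij / (p[i] + p[j])
            # standard MM update for Bradley-Terry; keep old value if no data
            p_new[i] = num / den if den > 0 else p[i]
        p_new /= p_new.sum()
        if np.abs(p_new - p).max() < tol:
            p = p_new
            break
        p = p_new
    return p
```

With two methods where method 0 wins 9 of 10 comparisons, the fit converges to scores of roughly 0.9 and 0.1, matching the empirical win rate.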