[bibtex]
@InProceedings{Ren_2024_CVPR,
  author    = {Ren, Bin and Li, Yawei and Mehta, Nancy and Timofte, Radu and Yu, Hongyuan and Wan, Cheng and Hong, Yuxin and Han, Bingnan and Wu, Zhuoyuan and Zou, Yajun and Liu, Yuqing and Li, Jizhe and He, Keji and Fan, Chao and Zhang, Heng and Zhang, Xiaolin and Yin, Xuanwu and Zuo, Kunlong and Liao, Bohao and Xia, Peizhe and Peng, Long and Du, Zhibo and Di, Xin and Li, Wangkai and Wang, Yang and Zhai, Wei and Pei, Renjing and Guo, Jiaming and Xu, Songcen and Cao, Yang and Zha, Zhengjun and Wang, Yan and Liu, Yi and Wang, Qing and Zhang, Gang and Zhang, Liou and Zhao, Shijie and Sun, Long and Pan, Jinshan and Dong, Jiangxin and Tang, Jinhui and Liu, Xin and Yan, Min and Wang, Qian and Zhou, Menghan and Yan, Yiqiang and Liu, Yixuan and Chan, Wensong and Tang, Dehua and Zhou, Dong and Wang, Li and Tian, Lu and Emad, Barsoum and Jia, Bohan and Qiao, Junbo and Zhou, Yunshuai and Zhang, Yun and Li, Wei and Lin, Shaohui and Zhou, Shenglong and Chen, Binbin and Liao, Jincheng and Zhao, Suiyi and Zhang, Zhao and Wang, Bo and Luo, Yan and Wei, Yanyan and Li, Feng and Wang, Mingshen and Li, Yawei and Guan, Jinhan and Hu, Dehua and Yu, Jiawei and Xu, Qisheng and Sun, Tao and Lan, Long and Xu, Kele and Lin, Xin and Yue, Jingtong and Yang, Lehan and Du, Shiyi and Qi, Lu and Ren, Chao and Han, Zeyu and Wang, Yuhan and Chen, Chaolin and Li, Haobo and Zheng, Mingjun and Yang, Zhongbao and Song, Lianhong and Yan, Xingzhuo and Fu, Minghan and Zhang, Jingyi and Li, Baiang and Zhu, Qi and Xu, Xiaogang and Guo, Dan and Guo, Chunle and Chen, Jiadi and Long, Huanhuan and Duanmu, Chunjiang and Lei, Xiaoyan and Liu, Jie and Jia, Weilin and Cao, Weifeng and Zhang, Wenlong and Mao, Yanyu and Guo, Ruilong and Zhang, Nihao and Wang, Qian and Pandey, Manoj and Chernozhukov, Maksym and Le, Giang and Cheng, Shuli and Wang, Hongyuan and Wei, Ziyan and Tang, Qingting and Wang, Liejun and Li, Yongming and Guo, Yanhui and Xu, Hao and Khatami-Rizi, Akram and Mahmoudi-Aznaveh, Ahamad and Hsu, Chih-Chung and Lee, Chia-Ming and Chou, Yi-Shiuan and Joshi, Amogh and Akalwadi, Nikhil and Malagi, Sampada and Yashaswini, Palani and Desai, Chaitra and Tabib, Ramesh Ashok and Patil, Ujwala and Mudenagudi, Uma},
  title     = {The Ninth NTIRE 2024 Efficient Super-Resolution Challenge Report},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2024},
  pages     = {6595-6631}
}
The Ninth NTIRE 2024 Efficient Super-Resolution Challenge Report
Abstract
This paper provides a comprehensive review of the NTIRE 2024 challenge, focusing on efficient single-image super-resolution (ESR) solutions and their outcomes. The task of this challenge is to super-resolve an input image with a magnification factor of x4 based on pairs of low- and corresponding high-resolution images. The primary objective is to develop networks that optimize various aspects such as runtime, parameters, and FLOPs, while still maintaining a peak signal-to-noise ratio (PSNR) of approximately 26.90 dB on the DIV2K_LSDIR_valid dataset and 26.99 dB on the DIV2K_LSDIR_test dataset. In addition, this challenge has four tracks, including the main track (overall performance), sub-track 1 (runtime), sub-track 2 (FLOPs), and sub-track 3 (parameters). In the main track, all three metrics (i.e., runtime, FLOPs, and parameter count) were considered, and the ranking is calculated as a weighted sum of the scores of all other sub-tracks. In sub-track 1, the practical runtime performance of the submissions was evaluated, and the corresponding score was used to determine the ranking. In sub-track 2, the number of FLOPs was considered, and the score calculated from the FLOPs was used to determine the ranking. In sub-track 3, the number of parameters was considered, and the score calculated from the parameter count was used to determine the ranking. RLFN is set as the baseline for efficiency measurement. The challenge had 262 registered participants, and 34 teams made valid submissions. Together, these submissions gauge the state of the art in efficient single-image super-resolution. To facilitate the reproducibility of the challenge and enable other researchers to build upon these findings, the code and the pre-trained models of the validated solutions are made publicly available at https://github.com/Amazingren/NTIRE2024_ESR/.
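To make the scoring scheme concrete, the sketch below shows one way such a ranking could be computed: each efficiency metric is turned into a sub-track score by normalizing against the RLFN baseline, and the main-track score is a weighted sum of those sub-track scores. The weights, the normalization, and the baseline values here are illustrative assumptions, not the official formula from the report.

# Illustrative sketch only (Python): the weights, normalization, and baseline
# values below are assumptions for demonstration, not the official NTIRE 2024
# ESR scoring formula defined in the report.

def sub_track_score(value, baseline):
    """Score one efficiency metric (runtime, FLOPs, or parameters) against the
    RLFN baseline; a value below the baseline yields a score above 1."""
    return baseline / value

def main_track_score(runtime, flops, params,
                     baseline=(1.0, 1.0, 1.0),    # placeholder RLFN reference values
                     weights=(0.5, 0.25, 0.25)):  # assumed emphasis on runtime
    """Combine the three sub-track scores into a weighted sum."""
    metrics = (runtime, flops, params)
    return sum(w * sub_track_score(m, b)
               for w, m, b in zip(weights, metrics, baseline))

# Example: a model that halves runtime and FLOPs while matching the baseline
# parameter count scores 0.5*2 + 0.25*2 + 0.25*1 = 1.75.
print(main_track_score(runtime=0.5, flops=0.5, params=1.0))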
