Deep Learning-Based Distortion Sensitivity Prediction for Full-Reference Image Quality Assessment

Sewoong Ahn, Yeji Choi, Kwangjin Yoon; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2021, pp. 344-353

Abstract


Previous full-reference image quality assessment (FR-IQA) methods aim to evaluate the quality of images impaired by traditional distortions such as JPEG compression, white noise, and Gaussian blur. However, there has been little research on predicting the quality of images produced by image processing algorithms such as super-resolution, denoising, and restoration. Motivated by a previous model that predicts distortion sensitivity maps, we adopt DeepQA as our baseline on a challenge database that includes such diverse distortions. We further improve the baseline by dividing it into three parts and modifying each: 1) a distortion encoding network, 2) a sensitivity generation network, and 3) score regression. Through rigorous experiments, the proposed model achieves better prediction accuracy on the challenge database than other image quality prediction methods, and it also produces better visualization results than the baseline. We submitted our model to the NTIRE 2021 Perceptual Image Quality Assessment Challenge and placed 12th in the main score.
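DeepQA-style models frame FR-IQA as predicting a human visual sensitivity map that weights an objective error map between the reference and distorted images before pooling to a score. A minimal NumPy sketch of that idea is below; it is illustrative only, with a hand-crafted gradient-based proxy standing in for the learned sensitivity generation network, a simple mean pooling standing in for the learned score regression, and all function names our own:

```python
import numpy as np

def objective_error_map(ref, dist, eps=1.0):
    # Log-scaled squared error (one common choice), normalized to [0, 1]
    return np.log(1.0 + (ref - dist) ** 2 / eps) / np.log(1.0 + 1.0 / eps)

def sensitivity_map_proxy(ref):
    # Placeholder for the learned sensitivity generation network:
    # a crude texture-masking proxy from local gradient magnitude,
    # so flat regions (where errors are more visible) get high weight.
    gy, gx = np.gradient(ref)
    grad = np.sqrt(gx ** 2 + gy ** 2)
    return 1.0 / (1.0 + grad)

def predict_score(ref, dist):
    # Score regression stand-in: pool the sensitivity-weighted error map.
    perceptual_map = objective_error_map(ref, dist) * sensitivity_map_proxy(ref)
    return 1.0 - perceptual_map.mean()  # higher score = better quality

# Toy usage on a synthetic grayscale image in [0, 1]
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
dist = np.clip(ref + rng.normal(0.0, 0.1, ref.shape), 0.0, 1.0)
print(predict_score(ref, ref))   # identical images -> 1.0
print(predict_score(ref, dist))  # noisy image -> lower score
```

In the actual model, both the sensitivity map and the regression are learned end-to-end from subjective scores; this sketch only shows how the three parts (error encoding, sensitivity weighting, pooling/regression) fit together.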

Related Material


[bibtex]
@InProceedings{Ahn_2021_CVPR,
    author    = {Ahn, Sewoong and Choi, Yeji and Yoon, Kwangjin},
    title     = {Deep Learning-Based Distortion Sensitivity Prediction for Full-Reference Image Quality Assessment},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2021},
    pages     = {344-353}
}