Image Quality Assessment Using Synthetic Images

Pavan C. Madhusudana, Neil Birkbeck, Yilin Wang, Balu Adsumilli, Alan C. Bovik; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Workshops, 2022, pp. 93-102


Training deep models using contrastive learning has achieved impressive performance on various computer vision tasks. Since training is done in a self-supervised manner on unlabeled data, contrastive learning is an attractive candidate for applications for which large labeled datasets are hard or expensive to obtain. In this work we investigate the outcomes of using contrastive learning on synthetically generated images for the Image Quality Assessment (IQA) problem. The training data consists of computer-generated images corrupted with predetermined distortion types. Predicting distortion type and degree is used as an auxiliary task to learn image quality features. The learned representations are then used to predict quality in a No-Reference (NR) setting on real-world images. We show through extensive experiments that this model achieves performance comparable to state-of-the-art NR image quality models when evaluated on real images afflicted with synthetic distortions, even without using any real images during training. Our results indicate that training with synthetically generated images can also lead to effective and perceptually relevant representations.
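The abstract describes using distortion type and degree prediction as the auxiliary contrastive task: images sharing the same (distortion type, level) pair are treated as positives, all others as negatives. A minimal NumPy sketch of such a supervised-contrastive-style objective is shown below; the function name, the flat integer labels encoding (type, level) pairs, and the temperature value are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def distortion_contrastive_loss(embeddings, labels, temperature=0.1):
    """Contrastive loss where embeddings of images sharing the same
    (distortion type, level) label are pulled together and others pushed apart.
    embeddings: (N, D) float array; labels: (N,) int array of distortion classes."""
    # L2-normalize so the dot product is cosine similarity
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature                    # (N, N) similarity logits
    np.fill_diagonal(sim, -np.inf)                 # exclude self-similarity
    # row-wise log-softmax over all other samples
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    loss, n = 0.0, 0
    for i in range(len(labels)):
        # positives: other samples with the same distortion (type, level) label
        pos = (labels == labels[i]) & (np.arange(len(labels)) != i)
        if pos.any():
            loss -= log_prob[i, pos].mean()
            n += 1
    return loss / max(n, 1)
```

In practice the paper trains a deep encoder with such an objective on synthetically distorted images, then regresses quality scores from the frozen representations; the sketch only illustrates the grouping-by-distortion idea.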

Related Material

@InProceedings{Madhusudana_2022_WACV,
    author    = {Madhusudana, Pavan C. and Birkbeck, Neil and Wang, Yilin and Adsumilli, Balu and Bovik, Alan C.},
    title     = {Image Quality Assessment Using Synthetic Images},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Workshops},
    month     = {January},
    year      = {2022},
    pages     = {93-102}
}