RankIQA: Learning From Rankings for No-Reference Image Quality Assessment

Xialei Liu, Joost van de Weijer, Andrew D. Bagdanov; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 1040-1049

Abstract

We propose a no-reference image quality assessment (NR-IQA) approach that learns from rankings (RankIQA). To address the problem of limited IQA dataset size, we train a Siamese Network to rank images in terms of image quality by using synthetically generated distortions for which relative image quality is known. These ranked image sets can be automatically generated without laborious human labeling. We then use fine-tuning to transfer the knowledge represented in the trained Siamese Network to a traditional CNN that estimates absolute image quality from single images. We demonstrate how our approach can be made significantly more efficient than traditional Siamese Networks by forward propagating a batch of images through a single network and backpropagating gradients derived from all pairs of images in the batch. Experiments on the TID2013 benchmark show that we improve the state-of-the-art by over 5%. Furthermore, on the LIVE benchmark we show that our approach is superior to existing NR-IQA techniques and that we even outperform state-of-the-art full-reference IQA (FR-IQA) methods without having to resort to high-quality reference images to infer IQA.
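
The efficient batch-wise ranking described above can be illustrated with a short sketch. The snippet below is a minimal, hypothetical PyTorch version of a pairwise ranking hinge loss (the class name PairwiseRankingHingeLoss, the default margin, and the choice of PyTorch are our assumptions, not details taken from the paper): a whole batch is forward propagated through one network, and the hinge loss is accumulated over every pair whose relative quality is known from the synthetic distortion levels, so a single backward pass carries gradients from all pairs.

import torch
import torch.nn as nn

class PairwiseRankingHingeLoss(nn.Module):
    """Hinge loss over all ordered pairs in a batch whose relative quality is known.

    `scores` are quality predictions for one batch forward propagated through a
    single network; `levels` are the synthetic distortion levels of the same
    images (a lower level means a less distorted, i.e. higher-quality, image).
    """

    def __init__(self, margin: float = 1.0):
        super().__init__()
        self.margin = margin

    def forward(self, scores: torch.Tensor, levels: torch.Tensor) -> torch.Tensor:
        # diff[i, j] = predicted score of image i minus predicted score of image j
        diff = scores.unsqueeze(1) - scores.unsqueeze(0)
        # valid[i, j] is True when image i is less distorted than image j,
        # so image i should receive the higher quality score
        valid = levels.unsqueeze(1) < levels.unsqueeze(0)
        # standard ranking hinge: penalize pairs that violate the margin
        pair_losses = torch.clamp(self.margin - diff, min=0.0)[valid]
        if pair_losses.numel() == 0:
            return scores.sum() * 0.0  # keep the graph alive for degenerate batches
        return pair_losses.mean()

# Usage sketch: `backbone` is any CNN mapping an image to a single scalar score.
# backbone = ...                                   # e.g. a VGG-style regressor
# scores = backbone(images).squeeze(1)             # one forward pass for the whole batch
# loss = PairwiseRankingHingeLoss()(scores, levels)
# loss.backward()                                  # gradients from all pairs at once

Because every pair in the batch contributes to a single accumulated loss, the backbone runs forward once per image rather than once per pair, which is the efficiency gain over a conventional Siamese setup that the abstract refers to.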

Related Material

[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Liu_2017_ICCV,
author = {Liu, Xialei and van de Weijer, Joost and Bagdanov, Andrew D.},
title = {RankIQA: Learning From Rankings for No-Reference Image Quality Assessment},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}