Predicting Visible Image Differences Under Varying Display Brightness and Viewing Distance

Nanyang Ye, Krzysztof Wolski, Rafal K. Mantiuk; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 5434-5442

Abstract

Numerous applications require a robust metric that can predict whether image differences are visible or not. However, the accuracy of existing white-box visibility metrics, such as HDR-VDP, is often not good enough. CNN-based black-box visibility metrics have proven to be more accurate, but they cannot account for differences in viewing conditions, such as display brightness and viewing distance. In this paper, we propose a CNN-based visibility metric that maintains the accuracy of deep network solutions while accounting for viewing conditions. To achieve this, we extend the existing dataset of locally visible differences (LocVis) with a new set of measurements collected under the aforementioned viewing conditions. We then develop a hybrid model that combines white-box processing stages, which model the effects of luminance masking and contrast sensitivity, with a black-box deep neural network. We demonstrate that the novel hybrid model handles changes in viewing conditions correctly and outperforms state-of-the-art metrics.

Related Material

[bibtex]
@InProceedings{Ye_2019_CVPR,
  author    = {Ye, Nanyang and Wolski, Krzysztof and Mantiuk, Rafal K.},
  title     = {Predicting Visible Image Differences Under Varying Display Brightness and Viewing Distance},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2019}
}