Image Correction via Deep Reciprocating HDR Transformation

Xin Yang, Ke Xu, Yibing Song, Qiang Zhang, Xiaopeng Wei, Rynson W.H. Lau; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 1798-1807

Abstract


Image correction aims to adjust an input image into a visually pleasing one with the detail in the under-/over-exposed regions recovered. However, existing image correction methods are mainly based on image pixel operations, and recovering the lost detail in these under-/over-exposed regions with such operations is challenging. We therefore revisit the image formation procedure and notice that the detail is contained in the high dynamic range (HDR) light intensities (which can be perceived by human eyes) but is lost during the camera's nonlinear imaging process in the low dynamic range (LDR) domain. Inspired by this observation, we formulate the image correction problem as a Deep Reciprocating HDR Transformation (DRHT) process and propose a novel approach that first reconstructs the lost detail in the HDR domain and then transfers it back to the LDR domain to produce the output image with the recovered detail preserved. To this end, we propose an end-to-end DRHT model, which contains two CNNs, one for HDR detail reconstruction and the other for LDR detail correction. Experiments on standard benchmarks demonstrate the effectiveness of the proposed method compared with state-of-the-art image correction methods.
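The abstract only outlines the two-stage pipeline, so the sketch below is a minimal, hypothetical PyTorch rendering of that idea rather than the authors' actual architecture: a first CNN reconstructs detail in the HDR domain, and a second CNN maps the result back to a corrected LDR image, with both stages chained end to end. All layer choices and names (HDRReconstructionNet, LDRCorrectionNet, DRHT) are assumptions made for illustration only.

# Minimal, hypothetical sketch of the two-stage DRHT idea described in the
# abstract. Architectures, layer sizes, and names are illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Small convolution + ReLU unit reused by both stages.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class HDRReconstructionNet(nn.Module):
    # Stage 1 (assumed form): predict an HDR image from the LDR input.
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            conv_block(3, 64), conv_block(64, 64), conv_block(64, 64),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
        )
    def forward(self, ldr):
        # Residual prediction on top of the LDR input (an assumption).
        return self.body(ldr) + ldr

class LDRCorrectionNet(nn.Module):
    # Stage 2 (assumed form): map the reconstructed HDR back to a corrected LDR image.
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            conv_block(3, 64), conv_block(64, 64), conv_block(64, 64),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
        )
    def forward(self, hdr):
        # Constrain the corrected LDR output to [0, 1].
        return torch.sigmoid(self.body(hdr))

class DRHT(nn.Module):
    # End-to-end chaining of the two stages, as the abstract describes.
    def __init__(self):
        super().__init__()
        self.hdr_net = HDRReconstructionNet()
        self.ldr_net = LDRCorrectionNet()
    def forward(self, ldr):
        hdr = self.hdr_net(ldr)
        corrected = self.ldr_net(hdr)
        return hdr, corrected

if __name__ == "__main__":
    model = DRHT()
    x = torch.rand(1, 3, 128, 128)        # dummy LDR input in [0, 1]
    hdr_pred, ldr_out = model(x)
    print(hdr_pred.shape, ldr_out.shape)  # both torch.Size([1, 3, 128, 128])

Returning both the intermediate HDR prediction and the final LDR output reflects the paper's framing, in which supervision is available in both domains; how the two losses are actually weighted and what ground truth is used are not specified in the abstract.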

Related Material


[bibtex]
@InProceedings{Yang_2018_CVPR,
author = {Yang, Xin and Xu, Ke and Song, Yibing and Zhang, Qiang and Wei, Xiaopeng and Lau, Rynson W.H.},
title = {Image Correction via Deep Reciprocating HDR Transformation},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}