DeepFuse: A Deep Unsupervised Approach for Exposure Fusion With Extreme Exposure Image Pairs

K. Ram Prabhakar, V. Sai Srikar, R. Venkatesh Babu; The IEEE International Conference on Computer Vision (ICCV), 2017, pp. 4714-4722


We present a novel deep learning architecture for fusing static multi-exposure images. Current multi-exposure fusion (MEF) approaches use hand-crafted features to fuse the input sequence. However, these weak hand-crafted representations are not robust to varying input conditions, and they perform poorly for extreme exposure image pairs. Thus, it is highly desirable to have a method that is robust to varying input conditions and capable of handling extreme exposures without artifacts. Deep representations are known to be robust to input conditions and have shown phenomenal performance in supervised settings. However, the stumbling block in using deep learning for MEF has been the lack of sufficient training data and of an oracle to provide ground truth for supervision. To address these issues, we have gathered a large dataset of multi-exposure image stacks for training, and, to circumvent the need for ground-truth images, we propose an unsupervised deep learning framework for MEF that uses a no-reference quality metric as the loss function. The proposed approach uses a novel CNN architecture trained to learn the fusion operation without a reference ground-truth image. The model fuses a set of common low-level features extracted from each image to generate artifact-free, perceptually pleasing results. We perform extensive quantitative and qualitative evaluation and show that the proposed technique outperforms existing state-of-the-art approaches on a variety of natural images.
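The key idea in the abstract is that an unsupervised loss can replace a ground-truth fused image: a no-reference metric (the paper uses MEF-SSIM) builds a "desired" patch from the inputs themselves and scores the fused output against it. The following is a minimal NumPy sketch of that scoring idea, not the authors' code; the simplified structure-only metric, the toy exposures, and all names here are illustrative assumptions.

```python
import numpy as np

def decompose(patch, eps=1e-12):
    """Split a patch into mean intensity, contrast, and unit-norm structure."""
    mu = patch.mean()
    diff = patch - mu
    c = np.linalg.norm(diff)      # contrast (signal strength)
    s = diff / (c + eps)          # structure (direction)
    return mu, c, s

def mef_ssim_patch(inputs, fused, eps=1e-12):
    """Simplified, structure-only MEF-SSIM-style score for one patch.

    The desired patch takes the highest contrast among the inputs and a
    contrast-weighted average of their structures; the fused patch is then
    compared to it with an SSIM-like similarity (1 = perfect match up to a
    mean shift). This is a toy stand-in for the paper's actual loss.
    """
    stats = [decompose(p) for p in inputs]
    c_hat = max(c for _, c, _ in stats)                 # desired contrast
    s_bar = sum(c * s for _, c, s in stats)             # weighted structures
    s_hat = s_bar / (np.linalg.norm(s_bar) + eps)       # desired structure
    desired = c_hat * s_hat
    _, c_f, s_f = decompose(fused)
    actual = c_f * s_f
    num = 2.0 * np.dot(desired, actual) + eps
    den = np.dot(desired, desired) + np.dot(actual, actual) + eps
    return num / den

rng = np.random.default_rng(0)
scene = rng.random(49)                    # one flattened 7x7 patch (toy data)
under = np.clip(0.3 * scene, 0.0, 1.0)    # simulated under-exposure
over = np.clip(1.7 * scene, 0.0, 1.0)     # simulated over-exposure (clipped)

# A fused patch that preserves the scene's structure scores near 1,
# while an unrelated patch scores much lower -- no reference image needed.
good = mef_ssim_patch([under, over], scene)
bad = mef_ssim_patch([under, over], rng.random(49))
print(round(good, 3), round(bad, 3))
```

Because the score is differentiable in the fused patch, maximizing it (or minimizing one minus it) over the CNN's output is what lets the network learn the fusion operation without any ground-truth fused image.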

@InProceedings{Prabhakar_2017_ICCV,
  author = {Ram Prabhakar, K. and Sai Srikar, V. and Venkatesh Babu, R.},
  title = {DeepFuse: A Deep Unsupervised Approach for Exposure Fusion With Extreme Exposure Image Pairs},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  month = {Oct},
  year = {2017}
}