Learning to Jointly Generate and Separate Reflections

Daiqian Ma, Renjie Wan, Boxin Shi, Alex C. Kot, Ling-Yu Duan; The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 2444-2452

Abstract


Existing learning-based single-image reflection removal methods trained on paired data have fundamental limitations in generalizing to real-world reflections, owing to the limited variation in the training pairs. In this work, we propose to jointly generate and separate reflections within a weakly-supervised learning framework, aiming to model reflection image formation more comprehensively with abundant unpaired supervision. By imposing adversarial losses and a combinable mapping mechanism in a multi-task structure, the proposed framework elegantly integrates the two separate stages of reflection generation and separation into a unified model. A gradient constraint is also incorporated into the concurrent training process of the multi-task learning. In particular, we build an unpaired reflection dataset of 4,027 images, which facilitates weakly-supervised learning of the reflection removal model. Extensive experiments on a public benchmark dataset show that our framework performs favorably against state-of-the-art methods and consistently produces visually appealing results.
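The reflection image formation mentioned in the abstract is commonly modeled in this literature as a linear blend of a transmission layer with a (typically defocus-blurred) reflection layer. The sketch below illustrates that standard formation model only; the blending weight, blur choice, and function names are illustrative assumptions, not the paper's actual generator network.

```python
import numpy as np

def box_blur(img, k=5):
    # Simple separable box blur over height and width; a crude stand-in
    # for the defocus blur typically applied to the reflection layer.
    pad = k // 2
    out = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode='edge')
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda m: np.convolve(m, kernel, mode='valid'), 0, out)
    out = np.apply_along_axis(lambda m: np.convolve(m, kernel, mode='valid'), 1, out)
    return out

def synthesize_reflection(T, R, alpha=0.75):
    # Linear formation model I = alpha * T + (1 - alpha) * blur(R),
    # with T the transmission layer and R the reflection layer,
    # both float arrays of shape (H, W, 3) in [0, 1].
    # alpha=0.75 is an assumed blending weight for illustration.
    I = alpha * T + (1.0 - alpha) * box_blur(R)
    return np.clip(I, 0.0, 1.0)
```

In a learning pipeline, pairs (I, T) synthesized this way supply paired supervision, while the paper's framework additionally learns the generation step itself from unpaired real images rather than fixing it by hand as done here.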

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Ma_2019_ICCV,
author = {Ma, Daiqian and Wan, Renjie and Shi, Boxin and Kot, Alex C. and Duan, Ling-Yu},
title = {Learning to Jointly Generate and Separate Reflections},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}