Deep Single-Image Portrait Relighting

Hao Zhou, Sunil Hadap, Kalyan Sunkavalli, David W. Jacobs; The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 7194-7202

Abstract
Conventional physically-based methods for relighting portrait images need to solve an inverse rendering problem, estimating face geometry, reflectance, and lighting. However, inaccurate estimation of these face components can cause strong artifacts in relighting, leading to unsatisfactory results. In this work, we apply a physically-based portrait relighting method to generate a large-scale, high-quality, "in the wild" portrait relighting dataset (DPR). A deep Convolutional Neural Network (CNN) is then trained on this dataset to generate a relit portrait image from a source image and a target lighting. The training procedure regularizes the generated results, removing the artifacts caused by physically-based relighting methods. A GAN loss is further applied to improve the quality of the relit portrait images. Our trained network can relight portrait images at resolutions as high as 1024 × 1024. We evaluate the proposed method qualitatively and quantitatively on the proposed DPR dataset, the Flickr portrait dataset, and the Multi-PIE dataset. Our experiments demonstrate that the proposed method achieves state-of-the-art results. Please refer to https://zhhoper.github.io/dpr.html for the dataset and code.
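The "target lighting" input in this pipeline is typically a compact lighting representation; the code released by the authors uses 9-dimensional second-order spherical harmonics (SH) coefficients. As a hedged illustration of that representation (not the authors' network), the sketch below shades unit surface normals under SH lighting in the Lambertian setting; the function names `sh_basis` and `shade` are hypothetical helpers, not part of the released code.

```python
import numpy as np

def sh_basis(normals):
    """Evaluate the 9 real second-order spherical-harmonic basis
    functions at unit surface normals; (N, 3) -> (N, 9)."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),      # Y_0,0  (constant / ambient)
        0.488603 * y,                    # Y_1,-1
        0.488603 * z,                    # Y_1,0
        0.488603 * x,                    # Y_1,1
        1.092548 * x * y,                # Y_2,-2
        1.092548 * y * z,                # Y_2,-1
        0.315392 * (3.0 * z**2 - 1.0),   # Y_2,0
        1.092548 * x * z,                # Y_2,1
        0.546274 * (x**2 - y**2),        # Y_2,2
    ], axis=1)

def shade(normals, sh_coeffs):
    """Per-pixel irradiance under a 9-coefficient SH lighting vector."""
    return sh_basis(normals) @ sh_coeffs

# Example: a frontal normal lit by an ambient-only SH lighting vector.
normals = np.array([[0.0, 0.0, 1.0]])
ambient = np.zeros(9)
ambient[0] = 1.0
print(shade(normals, ambient))  # constant shading = 0.282095
```

In a relighting network of this kind, a 9-vector like `ambient` above would be fed alongside the source image, and the network would output the relit image (and, in the released DPR code, an estimate of the source lighting).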

Related Material
[pdf] [supp]
[bibtex]
@InProceedings{Zhou_2019_ICCV,
author = {Zhou, Hao and Hadap, Sunil and Sunkavalli, Kalyan and Jacobs, David W.},
title = {Deep Single-Image Portrait Relighting},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}