Label Denoising Adversarial Network (LDAN) for Inverse Lighting of Faces

Hao Zhou, Jin Sun, Yaser Yacoob, David W. Jacobs; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 6238-6247

Abstract


Lighting estimation from faces is an important task with applications in many areas such as image editing, intrinsic image decomposition, and image forgery detection. We propose to train a deep Convolutional Neural Network (CNN) to regress lighting parameters from a single face image. Because massive ground truth lighting labels for face images in the wild are not available, we use an existing method to estimate lighting parameters and treat them as noisy ground truth. To alleviate the effect of this noise, we build on the idea of Generative Adversarial Networks (GAN) and propose a Label Denoising Adversarial Network (LDAN). LDAN uses synthetic data with accurate ground truth to help train a deep CNN for lighting regression on real face images. Experiments show that our network outperforms existing methods in producing consistent lighting parameters for different faces captured under similar lighting conditions. To further evaluate the proposed method, we also apply it to regressing 2D key points of objects, where ground truth labels are available, and our experiments demonstrate its effectiveness on this application.
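
The sketch below illustrates, in PyTorch, one way the adversarial label-denoising scheme described above could be set up: a synthetic branch with accurate labels, a real branch with noisy labels, and a feature-level discriminator. All names, layer sizes, the 27-dimensional lighting vector, the loss weighting, and the decision to freeze the synthetic branch are illustrative assumptions for exposition, not the paper's exact architecture or training procedure.

import torch
import torch.nn as nn

LIGHT_DIM = 27  # assumed size of the lighting-parameter vector (not specified in the abstract)

def feature_net():
    # Small convolutional feature extractor; a placeholder for a deeper CNN.
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

synth_net = feature_net()              # synthetic branch: assumed pre-trained on synthetic faces with exact labels
real_net = feature_net()               # real branch: trained on real faces with noisy labels
regressor = nn.Linear(64, LIGHT_DIM)   # lighting regressor shared by both branches
critic = nn.Sequential(                # discriminator over features: synthetic vs. real
    nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1),
)

mse = nn.MSELoss()
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(list(real_net.parameters()) + list(regressor.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(critic.parameters(), lr=1e-4)

def train_step(synth_img, real_img, noisy_label):
    # Features of synthetic images come from the frozen, pre-trained branch.
    with torch.no_grad():
        f_synth = synth_net(synth_img)
    f_real = real_net(real_img)

    # Discriminator: separate synthetic-image features from real-image features.
    d_loss = bce(critic(f_synth), torch.ones(f_synth.size(0), 1)) \
           + bce(critic(f_real.detach()), torch.zeros(f_real.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Real branch: fool the critic (align feature distributions) while the
    # noisy labels from the existing estimator act as a soft regression target.
    adv_loss = bce(critic(f_real), torch.ones(f_real.size(0), 1))
    reg_loss = mse(regressor(f_real), noisy_label)
    g_loss = adv_loss + reg_loss
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Example call with random tensors standing in for image batches and noisy labels.
d, g = train_step(torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64), torch.randn(8, LIGHT_DIM))

The regression term keeps predictions anchored to the noisy labels, while the adversarial term pushes real-image features toward the distribution of synthetic-image features, for which clean supervision exists. The abstract does not specify the training schedule; this sketch simply assumes the synthetic branch has already been trained on synthetic data with accurate labels.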

Related Material


[pdf] [supp] [arXiv] [video]
@InProceedings{Zhou_2018_CVPR,
  author    = {Zhou, Hao and Sun, Jin and Yacoob, Yaser and Jacobs, David W.},
  title     = {Label Denoising Adversarial Network (LDAN) for Inverse Lighting of Faces},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2018}
}