Neural Face Editing With Intrinsic Image Disentangling
Zhixin Shu, Ersin Yumer, Sunil Hadap, Kalyan Sunkavalli, Eli Shechtman, Dimitris Samaras; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 5541-5550
Abstract
Traditional face editing methods often require a number of sophisticated and task-specific algorithms to be applied one after the other, a process that is tedious, fragile, and computationally intensive. In this paper, we propose an end-to-end generative adversarial network that infers a face-specific disentangled representation of intrinsic face properties, including shape (i.e., normals), albedo, and lighting, as well as an alpha matte. We show that this network can be trained on "in-the-wild" images by incorporating an in-network physically-based image formation module and appropriate loss functions. Our disentangled latent representation allows for semantically relevant edits, where one aspect of facial appearance can be manipulated while keeping orthogonal properties fixed, and we demonstrate its use for a number of facial editing applications.
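For intuition, the in-network image formation mentioned in the abstract can be sketched as Lambertian shading of the inferred normals under spherical-harmonics lighting, multiplied by the albedo and composited over a background with the alpha matte. The sketch below is illustrative only: the function names, the unnormalized second-order SH basis, and the grayscale-lighting assumption are the editor's simplifications, not the paper's exact module.

import numpy as np

def sh_basis(normals):
    # Second-order spherical-harmonics basis (9 terms, constants omitted)
    # evaluated at unit surface normals of shape (H, W, 3).
    x, y, z = normals[..., 0], normals[..., 1], normals[..., 2]
    return np.stack([
        np.ones_like(x),        # constant term
        y, z, x,                # first-order terms
        x * y, y * z,           # second-order terms
        3 * z**2 - 1, x * z,
        x**2 - y**2,
    ], axis=-1)

def render_face(albedo, normals, light, matte, background):
    # albedo:     (H, W, 3) diffuse albedo
    # normals:    (H, W, 3) unit surface normals
    # light:      (9,) SH lighting coefficients (grayscale assumed here)
    # matte:      (H, W, 1) foreground alpha matte
    # background: (H, W, 3) background image
    shading = sh_basis(normals) @ light        # per-pixel Lambertian shading S(N, L)
    foreground = albedo * shading[..., None]   # A * S(N, L)
    return matte * foreground + (1.0 - matte) * background

In the paper this rendering step is differentiable and placed inside the network, so reconstruction and adversarial losses on the final image can be back-propagated to the individual normal, albedo, lighting, and matte estimates.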
Related Material
[pdf]
[supp]
[arXiv]
[video]
[bibtex]
@InProceedings{Shu_2017_CVPR,
author = {Shu, Zhixin and Yumer, Ersin and Hadap, Sunil and Sunkavalli, Kalyan and Shechtman, Eli and Samaras, Dimitris},
title = {Neural Face Editing With Intrinsic Image Disentangling},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {July},
year = {2017}
}