Exploring Disentangled Feature Representation Beyond Face Identification
Yu Liu, Fangyin Wei, Jing Shao, Lu Sheng, Junjie Yan, Xiaogang Wang; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 2080-2089
Abstract
This paper proposes learning disentangled but complementary face features with minimal supervision from face identification. Specifically, we construct an identity Distilling and Dispelling Auto-Encoder (D^2AE) framework that adversarially learns identity-distilled features for identity verification and identity-dispelled features to fool the verification system. Thanks to the two-stream design, the learned disentangled features represent not only the identity or attributes but also the complete input image. Comprehensive evaluations further demonstrate that the proposed features not only preserve state-of-the-art identity verification performance on LFW, but also acquire comparable discriminative power for face attribute recognition on CelebA and LFWA. Moreover, the proposed system enables semantic control of face generation and editing over various identities and attributes in an unsupervised manner.
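For readers who want a concrete picture of the two-stream idea, the following is a minimal, hypothetical PyTorch sketch, not the authors' implementation: a shared encoder feeds an identity-distilling branch trained with an identity classification loss and an identity-dispelling branch trained adversarially so identity cannot be recovered from it, while a decoder reconstructs the image from both features. The adversarial objective here is approximated with gradient reversal as a stand-in for the paper's adversarial training scheme; all layer sizes, names, and the toy encoder/decoder are illustrative assumptions.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output  # adversarial signal: dispelled features try to fool the identity head

class D2AESketch(nn.Module):
    # Illustrative stand-in for the D^2AE two-stream design (not the paper's architecture).
    def __init__(self, feat_dim=256, num_ids=1000):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 512), nn.ReLU())
        self.distill = nn.Linear(512, feat_dim)        # identity-distilled branch
        self.dispel = nn.Linear(512, feat_dim)         # identity-dispelled branch
        self.id_head_t = nn.Linear(feat_dim, num_ids)  # identity classifier on distilled features
        self.id_head_p = nn.Linear(feat_dim, num_ids)  # adversarial identity classifier on dispelled features
        self.decoder = nn.Sequential(nn.Linear(2 * feat_dim, 3 * 64 * 64), nn.Sigmoid())

    def forward(self, x):
        h = self.encoder(x)
        f_t, f_p = self.distill(h), self.dispel(h)
        logits_t = self.id_head_t(f_t)
        logits_p = self.id_head_p(GradReverse.apply(f_p))      # reversed gradients dispel identity
        recon = self.decoder(torch.cat([f_t, f_p], dim=1)).view_as(x)  # both streams reconstruct the input
        return logits_t, logits_p, recon

# Toy usage: identity loss on both heads plus a reconstruction loss.
model = D2AESketch()
x = torch.rand(4, 3, 64, 64)
y = torch.randint(0, 1000, (4,))
logits_t, logits_p, recon = model(x)
loss = (nn.functional.cross_entropy(logits_t, y)
        + nn.functional.cross_entropy(logits_p, y)
        + nn.functional.mse_loss(recon, x))
loss.backward()

Because the decoder sees the concatenation of both streams, the pair of features must jointly describe the whole face, which is what lets the disentangled representation go beyond identity alone.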
Related Material
@InProceedings{Liu_2018_CVPR,
author = {Liu, Yu and Wei, Fangyin and Shao, Jing and Sheng, Lu and Yan, Junjie and Wang, Xiaogang},
title = {Exploring Disentangled Feature Representation Beyond Face Identification},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}