EigenGAN: Layer-Wise Eigen-Learning for GANs

Zhenliang He, Meina Kan, Shiguang Shan; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 14408-14417

Abstract


Recent studies on Generative Adversarial Networks (GANs) reveal that different layers of a generative CNN hold different semantics of the synthesized images. However, few GAN models have explicit dimensions to control the semantic attributes represented in a specific layer. This paper proposes EigenGAN, which is able to mine interpretable and controllable dimensions from different generator layers in an unsupervised manner. Specifically, EigenGAN embeds one linear subspace with an orthogonal basis into each generator layer. Via generative adversarial training to learn a target distribution, these layer-wise subspaces automatically discover a set of "eigen-dimensions" at each layer corresponding to a set of semantic attributes or interpretable variations. By traversing the coefficient of a specific eigen-dimension, the generator can produce samples with continuous changes corresponding to a specific semantic attribute. Taking the human face as an example, EigenGAN can discover controllable dimensions for high-level concepts such as pose and gender in the subspaces of deep layers, as well as low-level concepts such as hue and color in the subspaces of shallow layers. Moreover, in the linear case, we theoretically prove that our algorithm derives the principal components as PCA does. Code is available at https://github.com/LynnHo/EigenGAN-Tensorflow.
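To make the layer-wise subspace concrete, below is a minimal PyTorch sketch of one such subspace module. The module name SubspaceLayer, its parameter names, and the squared-Frobenius orthogonality penalty are illustrative assumptions on our part, not the authors' implementation (the official code, linked above, is in TensorFlow). The module maps a coefficient vector z ~ N(0, I) to phi = U diag(L) z + mu, where U is the (near-)orthogonal basis, L holds learnable per-dimension importances, and mu is the subspace origin.

import torch
import torch.nn as nn

class SubspaceLayer(nn.Module):
    """Hedged sketch of one layer-wise linear subspace: phi = U @ (L * z) + mu."""

    def __init__(self, dim, num_basis):
        super().__init__()
        self.U = nn.Parameter(torch.empty(dim, num_basis))  # basis, kept near-orthonormal
        nn.init.orthogonal_(self.U)
        self.L = nn.Parameter(torch.ones(num_basis))        # diagonal "importances"
        self.mu = nn.Parameter(torch.zeros(dim))            # subspace origin

    def forward(self, z):
        # z: (batch, num_basis); each coordinate of z is one eigen-dimension's
        # coefficient, so sweeping a single entry traverses one semantic direction.
        return (z * self.L) @ self.U.t() + self.mu

    def orthogonality_penalty(self):
        # Regularizer pushing U^T U toward the identity, keeping the basis orthonormal.
        gram = self.U.t() @ self.U
        eye = torch.eye(gram.size(0), device=gram.device)
        return ((gram - eye) ** 2).sum()

In the full model, the per-layer output phi would be reshaped and injected (e.g., added) into that layer's intermediate feature map, the orthogonality penalties would be summed over layers into the generator loss, and traversing a single coordinate of z while holding the others fixed yields the continuous attribute change described in the abstract.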

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{He_2021_ICCV,
    author    = {He, Zhenliang and Kan, Meina and Shan, Shiguang},
    title     = {EigenGAN: Layer-Wise Eigen-Learning for GANs},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {14408-14417}
}