@InProceedings{Han_2022_WACV,
  author    = {Han, Ligong and Musunuri, Sri Harsha and Min, Martin Renqiang and Gao, Ruijiang and Tian, Yu and Metaxas, Dimitris},
  title     = {AE-StyleGAN: Improved Training of Style-Based Auto-Encoders},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2022},
  pages     = {3134-3143}
}
AE-StyleGAN: Improved Training of Style-Based Auto-Encoders
Abstract
StyleGANs have shown impressive results on data generation and manipulation in recent years, thanks to their disentangled style latent space. Much effort has gone into inverting a pre-trained generator, where an encoder is trained ad hoc after the generator, in a two-stage fashion. In this paper, we focus on style-based generators and ask a scientific question: Does forcing such a generator to reconstruct real data lead to a more disentangled latent space and make inversion from image to latent space easier? We describe a new methodology for training a style-based autoencoder in which the encoder and generator are optimized end-to-end. We show that our proposed model consistently outperforms baselines in terms of image inversion and generation quality.
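The core idea, jointly optimizing the encoder and generator end-to-end under a reconstruction objective rather than training the encoder after the fact, can be sketched roughly as follows. This is a minimal PyTorch illustration, not the authors' implementation: the toy encoder/generator/discriminator modules, the loss weight lambda_rec, and the non-saturating GAN loss are all assumptions made for the sake of a short runnable example.

# Minimal PyTorch sketch of end-to-end encoder/generator training with a
# reconstruction objective. All modules and hyperparameters are illustrative
# stand-ins, not the paper's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, image_dim = 64, 3 * 32 * 32

# Toy stand-ins: the encoder maps an image to a style code w; the
# generator maps a style code back to an image.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(image_dim, latent_dim))
generator = nn.Sequential(nn.Linear(latent_dim, image_dim), nn.Tanh())
discriminator = nn.Sequential(nn.Flatten(), nn.Linear(image_dim, 1))

# One optimizer covers both encoder and generator, so gradients from the
# reconstruction loss shape them jointly (end-to-end) rather than in stages.
opt_ae = torch.optim.Adam(
    list(encoder.parameters()) + list(generator.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
lambda_rec = 1.0  # assumed weight on the reconstruction term

for step in range(100):
    real = torch.rand(16, 3, 32, 32) * 2 - 1  # placeholder image batch

    # Discriminator step: real images vs. images generated from the prior.
    z = torch.randn(16, latent_dim)
    fake = generator(z).view_as(real)
    d_loss = (F.softplus(-discriminator(real)).mean()
              + F.softplus(discriminator(fake.detach())).mean())
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Joint encoder+generator step: adversarial loss on the reconstruction
    # plus a pixel-wise reconstruction loss on real data.
    w = encoder(real)                   # image -> style code
    recon = generator(w).view_as(real)  # style code -> image
    g_loss = (F.softplus(-discriminator(recon)).mean()
              + lambda_rec * F.l1_loss(recon, real))
    opt_ae.zero_grad(); g_loss.backward(); opt_ae.step()

The contrast with two-stage inversion is in the second update: the reconstruction gradient flows through the generator into the encoder in a single step, so the generator's latent space is shaped by real-data reconstruction while it is being trained, rather than being inverted afterwards.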