Transferring Unconditional to Conditional GANs With Hyper-Modulation

Héctor Laria, Yaxing Wang, Joost van de Weijer, Bogdan Raducanu; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2022, pp. 3840-3849

Abstract


GANs have matured in recent years and are able to generate high-resolution, realistic images. However, the computational resources and the data required to train high-quality GANs are enormous, and the study of transfer learning for these models is therefore an urgent topic. Many of the available high-quality pretrained GANs are unconditional (like StyleGAN). For many applications, however, conditional GANs are preferable, because they provide more control over the generation process, even though they often suffer from greater training difficulties. Therefore, in this paper, we focus on transferring from high-quality pretrained unconditional GANs to conditional GANs. This requires an architectural adaptation of the pretrained GAN to perform the conditioning. To this end, we propose hyper-modulated generative networks that allow for shared and complementary supervision.
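
To make the conditioning idea concrete, below is a minimal, hypothetical PyTorch sketch of class-conditional hyper-modulation layered on top of StyleGAN2-style weight modulation. All names (HyperModulation, HyperModulatedConv2d), dimensions, and the placement of the hyper-network are illustrative assumptions, not the paper's exact design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperModulation(nn.Module):
    # Hyper-network: maps a class label to per-channel modulation scales.
    def __init__(self, num_classes, embed_dim, in_channels):
        super().__init__()
        self.class_embed = nn.Embedding(num_classes, embed_dim)
        self.hyper = nn.Sequential(
            nn.Linear(embed_dim, embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, in_channels),
        )

    def forward(self, class_ids):
        # Predict scales around 1 so an untrained hyper-network stays close
        # to the identity, i.e. to the pretrained unconditional generator.
        return 1.0 + self.hyper(self.class_embed(class_ids))

class HyperModulatedConv2d(nn.Module):
    # StyleGAN2-style modulated convolution whose per-channel scales come
    # from a class-conditional hyper-network rather than the style vector.
    def __init__(self, in_ch, out_ch, kernel_size, num_classes, embed_dim=128):
        super().__init__()
        self.out_ch = out_ch
        self.padding = kernel_size // 2
        self.weight = nn.Parameter(
            torch.randn(out_ch, in_ch, kernel_size, kernel_size))
        self.hypermod = HyperModulation(num_classes, embed_dim, in_ch)

    def forward(self, x, class_ids):
        b, in_ch, height, width = x.shape
        scales = self.hypermod(class_ids)                          # (B, in_ch)
        weight = self.weight.unsqueeze(0) * scales.view(b, 1, in_ch, 1, 1)
        # Demodulation, as in StyleGAN2.
        demod = torch.rsqrt(weight.pow(2).sum(dim=[2, 3, 4]) + 1e-8)
        weight = weight * demod.view(b, self.out_ch, 1, 1, 1)
        # Apply per-sample weights via a grouped convolution.
        weight = weight.view(b * self.out_ch, in_ch, *weight.shape[3:])
        x = x.view(1, b * in_ch, height, width)
        out = F.conv2d(x, weight, padding=self.padding, groups=b)
        return out.view(b, self.out_ch, height, width)

# Example: condition one layer of a generator on a class label.
layer = HyperModulatedConv2d(in_ch=512, out_ch=512, kernel_size=3, num_classes=10)
features = torch.randn(4, 512, 16, 16)
labels = torch.randint(0, 10, (4,))
out = layer(features, labels)   # -> (4, 512, 16, 16)

In such a setup one would typically freeze the pretrained generator weights and train only the hyper-network, so supervision from all classes flows through the shared backbone; this is one possible reading of "shared and complementary supervision", not a statement of the authors' training procedure.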

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Laria_2022_CVPR,
    author    = {Laria, H\'ector and Wang, Yaxing and van de Weijer, Joost and Raducanu, Bogdan},
    title     = {Transferring Unconditional to Conditional GANs With Hyper-Modulation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2022},
    pages     = {3840-3849}
}