TinyGAN: Distilling BigGAN for Conditional Image Generation

Ting-Yun Chang, Chi-Jen Lu; Proceedings of the Asian Conference on Computer Vision (ACCV), 2020

Abstract

Generative Adversarial Networks (GANs) have become a powerful approach to generative image modeling. However, GANs are notorious for their training instability, especially on large-scale, complex datasets. While the recent BigGAN has significantly improved the quality of image generation on ImageNet, it requires a huge model, making it hard to deploy on resource-constrained devices. To reduce the model size, we propose a black-box knowledge distillation framework for compressing GANs, which features a stable and efficient training process. Given BigGAN as the teacher network, we train a much smaller student network to mimic its functionality, achieving competitive performance on Inception Score and FID with a generator that has 16 times fewer parameters.
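The black-box setup described in the abstract only requires sampling the teacher's outputs: draw (noise, class) pairs, query BigGAN for images, and fit a small student generator to those targets. Below is a minimal PyTorch sketch of one such distillation step, assuming a simple pixel-level L1 objective; the StudentGenerator architecture and the teacher interface are illustrative placeholders, not the paper's actual networks.

import torch
import torch.nn as nn

class StudentGenerator(nn.Module):
    # Tiny conditional generator mapping (noise z, class y) -> image.
    # The architecture is a deliberately small placeholder.
    def __init__(self, z_dim=128, num_classes=1000, img_size=128):
        super().__init__()
        self.embed = nn.Embedding(num_classes, z_dim)
        self.net = nn.Sequential(
            nn.Linear(2 * z_dim, 64 * 4 * 4),
            nn.Unflatten(1, (64, 4, 4)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 4x4 -> 8x8
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 8x8 -> 16x16
            nn.Upsample(size=img_size),  # crude upscaling to keep the sketch short
            nn.Tanh(),
        )

    def forward(self, z, y):
        return self.net(torch.cat([z, self.embed(y)], dim=1))

def distill_step(teacher, student, optimizer, batch=32, z_dim=128, num_classes=1000):
    # One black-box distillation step: the teacher is queried only for its
    # outputs (no gradients, no access to internals), and the student is
    # trained to reproduce them.
    z = torch.randn(batch, z_dim)
    y = torch.randint(0, num_classes, (batch,))
    with torch.no_grad():
        target = teacher(z, y)  # query the frozen teacher (e.g. BigGAN)
    loss = nn.functional.l1_loss(student(z, y), target)  # pixel-level loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

The actual TinyGAN objective is richer than a single pixel loss; this loop only illustrates the black-box training pattern the abstract describes, in which the student never needs the teacher's weights or gradients.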

Related Material

BibTeX

@InProceedings{Chang_2020_ACCV,
    author    = {Chang, Ting-Yun and Lu, Chi-Jen},
    title     = {TinyGAN: Distilling BigGAN for Conditional Image Generation},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {November},
    year      = {2020}
}