Modular Generative Adversarial Networks

Bo Zhao, Bo Chang, Zequn Jie, Leonid Sigal; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 150-165


Existing methods for multi-domain image-to-image translation attempt to directly map inputs to outputs using a single model. However, these methods have limited scalability and robustness. Inspired by module networks, this paper proposes ModularGAN, a model for multi-domain image-to-image translation that consists of several reusable and compatible modules with different functions. These modules can be trained simultaneously, then selected and combined to construct specific networks according to the domains involved in the image translation task. This gives ModularGAN superior flexibility in translating an input image to any desired domain. Experimental results demonstrate that our model not only produces compelling perceptual results but also outperforms state-of-the-art methods on the multi-domain facial attribute transfer task.
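The abstract's central idea is that independently trained modules (e.g., an encoder, per-attribute transformers, a decoder) are composed at test time into a network matching the requested target domains. A minimal conceptual sketch of such composition is below; all names and the dictionary-based "feature" representation are illustrative assumptions, not the paper's actual implementation.

```python
# Conceptual sketch of module composition (illustrative assumptions only;
# real ModularGAN modules are convolutional networks, not plain functions).

def encoder(image):
    # Encode the input image into an intermediate feature representation.
    return {"features": image, "history": ["encode"]}

def make_transformer(attribute):
    # Each transformer module edits a single attribute in feature space.
    def transformer(state):
        state["history"].append(f"transform:{attribute}")
        return state
    return transformer

def decoder(state):
    # Decode the modified features back into image space.
    state["history"].append("decode")
    return state

def build_network(target_attributes):
    # Select and chain reusable modules for the requested target domains.
    modules = ([encoder]
               + [make_transformer(a) for a in target_attributes]
               + [decoder])

    def network(image):
        state = image
        for module in modules:
            state = module(state)
        return state

    return network

# Compose a network for two hypothetical attribute domains.
net = build_network(["hair_color", "smile"])
result = net("input_image")
print(result["history"])
# ['encode', 'transform:hair_color', 'transform:smile', 'decode']
```

The key property this sketch illustrates is that adding a new domain only requires training one new transformer module, rather than retraining a monolithic model for every domain combination.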

Related Material

@InProceedings{Zhao_2018_ECCV,
  author    = {Zhao, Bo and Chang, Bo and Jie, Zequn and Sigal, Leonid},
  title     = {Modular Generative Adversarial Networks},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  month     = {September},
  year      = {2018}
}