Multi-Class Multi-Instance Count Conditioned Adversarial Image Generation

Amrutha Saseendran, Kathrin Skubch, Margret Keuper; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 6762-6771


Image generation has evolved rapidly in recent years. Modern architectures for adversarial training allow the generation of even high-resolution images with remarkable quality. At the same time, increasing effort is dedicated to controlling the content of generated images. In this paper, we take one further step in this direction and propose a conditional generative adversarial network (GAN) that generates images with a defined number of objects from given classes. This entails two fundamental abilities: (1) generating high-quality images under a complex constraint and (2) counting object instances per class in a given image. Our proposed model modularly extends the successful StyleGAN2 architecture with count-based conditioning as well as with a regression sub-network that counts the number of generated objects per class during training. In experiments on three different datasets, we show that the proposed model learns to generate images according to the given multi-class count condition even in the presence of complex backgrounds. In particular, we propose a new dataset, CityCount, which is derived from the Cityscapes street scenes dataset, to evaluate our approach in a challenging and practically relevant scenario. An implementation is available at
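As a rough illustration of what count-based conditioning might look like, the sketch below embeds a per-class count vector and concatenates it with the latent code before it would enter a StyleGAN2-style mapping network. This is a hypothetical minimal example, not the paper's actual implementation; the function name `condition_latent`, the normalization scheme, and the dimensions are assumptions made for illustration.

```python
import random

def condition_latent(z, counts, max_count=10):
    """Append a normalized per-class count vector to the latent code.

    Hypothetical sketch: counts like [2, 0, 3] (two instances of class 0,
    three of class 2) are scaled to [0, 1] and concatenated with z, so a
    mapping network downstream sees both the latent and the count condition.
    """
    norm_counts = [c / max_count for c in counts]  # scale counts to [0, 1]
    return z + norm_counts                          # vector concatenation

random.seed(0)
z = [random.gauss(0.0, 1.0) for _ in range(8)]  # latent code, dim 8
counts = [2, 0, 3]                              # desired instances per class
w_input = condition_latent(z, counts)
print(len(w_input))  # 11 = 8 latent dims + 3 count dims
```

During training, the regression sub-network described in the abstract would predict per-class counts from the generated image, and a regression loss against `counts` would push the generator to honor the condition.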

Related Material

@InProceedings{Saseendran_2021_ICCV,
    author    = {Saseendran, Amrutha and Skubch, Kathrin and Keuper, Margret},
    title     = {Multi-Class Multi-Instance Count Conditioned Adversarial Image Generation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {6762-6771}
}