One-Shot GAN: Learning To Generate Samples From Single Images and Videos

Vadim Sushko, Jurgen Gall, Anna Khoreva; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2021, pp. 2596-2600

Abstract


Training GANs in low-data regimes remains a challenge, as overfitting often leads to memorization or training divergence. In this work, we introduce One-Shot GAN, a model that can learn to generate samples from a training set as small as one image or one video. We propose a two-branch discriminator, with content and layout branches designed to judge the internal content of a scene separately from the realism of its layout. This allows synthesis of visually plausible, novel compositions of a scene, with varying content and layout, while preserving the context of the original sample. Compared to previous single-image GAN models, One-Shot GAN achieves higher diversity and quality of synthesis. It is also not restricted to the single-image setting, successfully learning in the newly introduced setting of a single video.
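The two-branch idea can be illustrated with a minimal sketch. The code below is a hypothetical simplification, not the authors' architecture: it assumes a shared feature map of shape (C, H, W), from which a content branch pools away spatial information (judging *what* is in the scene) and a layout branch pools away channel information (judging *where* things are). The weights `w_content` and `w_layout` stand in for the learned heads of each branch.

```python
import numpy as np

def two_branch_discriminator(features, w_content, w_layout):
    """Hypothetical sketch of a content/layout split in a discriminator.

    features : (C, H, W) shared feature map from a backbone (assumed).
    Returns a (content_score, layout_score) pair; in a real model each
    branch would be a small network, not a single linear score.
    """
    # Content branch: global average pool over spatial dims -> (C,)
    # so the score depends only on channel statistics (content).
    content_vec = features.mean(axis=(1, 2))
    content_score = float(content_vec @ w_content)

    # Layout branch: average pool over channels -> (H, W)
    # so the score depends only on the spatial arrangement (layout).
    layout_map = features.mean(axis=0)
    layout_score = float((layout_map * w_layout).sum())

    return content_score, layout_score

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4, 4))          # toy (C, H, W) feature map
c_score, l_score = two_branch_discriminator(
    feats, rng.standard_normal(8), rng.standard_normal((4, 4)))
```

Because each branch sees a different pooling of the same features, the discriminator can penalize unrealistic content and unrealistic layout independently, which is what lets the generator recombine content into new layouts.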

Related Material


[bibtex]
@InProceedings{Sushko_2021_CVPR,
    author    = {Sushko, Vadim and Gall, Jurgen and Khoreva, Anna},
    title     = {One-Shot GAN: Learning To Generate Samples From Single Images and Videos},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2021},
    pages     = {2596-2600}
}