Temporal Shift GAN for Large Scale Video Generation

Andres Munoz, Mohammadreza Zolfaghari, Max Argus, Thomas Brox; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2021, pp. 3179-3188

Abstract

Video generation models have become increasingly popular in the last few years; however, the standard 2D architectures used today lack natural spatio-temporal modelling capabilities. In this paper, we present a network architecture for video generation that models spatio-temporal consistency without resorting to costly 3D architectures. The architecture facilitates information exchange between neighboring time points, which improves the temporal consistency of both the high-level structure and the low-level details of the generated frames. The approach achieves state-of-the-art quantitative performance, as measured by the Inception Score on the UCF-101 dataset, as well as improved qualitative results. We also introduce a new quantitative measure (S3) that uses downstream tasks for evaluation. Moreover, we present a new multi-label dataset, MaisToy, which enables us to evaluate the generalization of the model.
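The core mechanism the abstract describes, exchanging information between neighboring time points without 3D convolutions, follows the temporal-shift idea: a fraction of feature channels is shifted forward or backward along the time axis so each frame's features mix with those of its temporal neighbors. The sketch below is illustrative only; the function name, the shift fraction, and zero-padding at the sequence ends are assumptions, not the paper's exact implementation.

```python
import numpy as np

def temporal_shift(x, shift_frac=0.25):
    """Shift a fraction of channels along the time axis.

    x: array of shape (T, C, H, W). The first shift_frac of the
    channels receives features from the previous frame, the next
    shift_frac receives features from the next frame, and the
    remaining channels are left unchanged. Sequence boundaries are
    zero-padded (an assumption for this sketch).
    """
    T, C, H, W = x.shape
    n = int(C * shift_frac)
    out = np.zeros_like(x)
    out[1:, :n] = x[:-1, :n]          # frame t sees frame t-1
    out[:-1, n:2 * n] = x[1:, n:2 * n]  # frame t sees frame t+1
    out[:, 2 * n:] = x[:, 2 * n:]       # untouched channels
    return out
```

Because shifting is a pure memory operation, it adds temporal context at essentially zero extra compute compared with 3D convolutions.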

Related Material

@InProceedings{Munoz_2021_WACV,
  author    = {Munoz, Andres and Zolfaghari, Mohammadreza and Argus, Max and Brox, Thomas},
  title     = {Temporal Shift GAN for Large Scale Video Generation},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2021},
  pages     = {3179-3188}
}