Temporal Generative Adversarial Nets With Singular Value Clipping

Masaki Saito, Eiichi Matsumoto, Shunta Saito; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2830-2839

Abstract


In this paper, we propose a generative model, Temporal Generative Adversarial Nets (TGAN), which can learn a semantic representation of unlabeled videos and is capable of generating videos. Unlike existing Generative Adversarial Nets (GAN)-based methods that generate videos with a single generator consisting of 3D deconvolutional layers, our model exploits two different types of generators: a temporal generator and an image generator. The temporal generator takes a single latent variable as input and outputs a set of latent variables, each of which corresponds to an image frame in a video. The image generator transforms a set of such latent variables into a video. To deal with the instability of GAN training with such advanced networks, we adopt a recently proposed model, Wasserstein GAN, and propose a novel method to train it stably in an end-to-end manner. The experimental results demonstrate the effectiveness of our methods.
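As a rough illustration of the two-generator design described in the abstract, the sketch below wires a temporal generator (1D deconvolutions that expand one latent vector into per-frame latent vectors) into an image generator (2D deconvolutions that render each per-frame latent as an image frame), together with a singular-value-clipping helper in the spirit of the title. This is a minimal PyTorch-style sketch, not the authors' implementation: the layer widths, the 16-frame length, the 32x32 resolution, and the names TemporalGenerator, ImageGenerator, and singular_value_clip are illustrative assumptions.

import torch
import torch.nn as nn

class TemporalGenerator(nn.Module):
    """Maps one latent vector z0 to a sequence of per-frame latent vectors."""
    def __init__(self, z_dim=100):
        super().__init__()
        # 1D deconvolutions expand a single time step into 16 steps (1 -> 2 -> 4 -> 8 -> 16).
        self.net = nn.Sequential(
            nn.ConvTranspose1d(z_dim, 512, kernel_size=1),
            nn.ReLU(),
            nn.ConvTranspose1d(512, 256, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(256, 128, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(128, 128, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(128, z_dim, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, z0):                    # z0: (B, z_dim)
        h = self.net(z0.unsqueeze(-1))        # (B, z_dim, 16)
        return h.transpose(1, 2)              # (B, 16, z_dim): one latent per frame

class ImageGenerator(nn.Module):
    """Maps each per-frame latent vector to one image frame."""
    def __init__(self, z_dim=100, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.ReLU(),    # 1x1  -> 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),      # 4x4  -> 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),       # 8x8  -> 16x16
            nn.ConvTranspose2d(64, channels, 4, 2, 1), nn.Tanh(),  # 16x16 -> 32x32
        )

    def forward(self, z_frames):              # z_frames: (B, T, z_dim)
        B, T, D = z_frames.shape
        frames = self.net(z_frames.reshape(B * T, D, 1, 1))
        return frames.reshape(B, T, *frames.shape[1:])   # (B, T, C, H, W) video

def singular_value_clip(weight, clip=1.0):
    """Clip the singular values of a 2D weight matrix to at most `clip`.
    One reading of Singular Value Clipping: bounding each layer's spectral
    norm so the WGAN critic's Lipschitz constraint holds."""
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    return U @ torch.diag(S.clamp(max=clip)) @ Vh

# Example: one latent vector -> a 16-frame video of shape (8, 16, 3, 32, 32).
z0 = torch.randn(8, 100)
video = ImageGenerator()(TemporalGenerator()(z0))

In this reading, the clipping helper would be applied periodically to the discriminator's weight matrices during WGAN training so that their spectral norms stay at or below one; how often and to which parameters it is applied are assumptions here, not details taken from the abstract.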

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Saito_2017_ICCV,
author = {Saito, Masaki and Matsumoto, Eiichi and Saito, Shunta},
title = {Temporal Generative Adversarial Nets With Singular Value Clipping},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}