Generating Synthetic Video Sequences by Explicitly Modeling Object Motion

S. Palazzo, C. Spampinato, P. D'Oro, D. Giordano, M. Shah; Proceedings of the European Conference on Computer Vision (ECCV) Workshops, 2018

Abstract


Recent GAN-based video generation approaches model videos as the combination of a time-independent scene component and a time-varying motion component, thus factorizing the generation problem into generating background and foreground separately. One of the main limitations of current approaches is that both factors are learned by mapping a single source latent space to videos, which complicates the generation task, as a single sampled latent point must be informative of both background and foreground content. In this paper, we propose a GAN framework for video generation that instead employs two latent spaces in order to structure the generative process in a more natural way: 1) a latent space for generating the static visual content of a scene (background), which remains the same for the whole video, and 2) a latent space where motion is encoded as a trajectory between sampled points, whose dynamics are modeled through an RNN encoder (jointly trained with the generator and the discriminator) and then mapped by the generator to visual objects' motion. Performance evaluation showed that our approach effectively controls the generation process and synthesizes more realistic videos than state-of-the-art methods.
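
To make the two-latent-space idea concrete, the sketch below shows, in PyTorch, how a static content code kept fixed across a video can be combined with per-frame motion codes produced by an RNN over a sampled latent trajectory. This is an illustrative assumption, not the authors' implementation: all module names, dimensions, and the simple fully connected decoder are hypothetical, and the adversarial training loop (generator, discriminator, and jointly trained RNN encoder) is omitted.

import torch
import torch.nn as nn

class MotionEncoder(nn.Module):
    # RNN that summarizes a sampled latent trajectory into per-frame motion codes.
    def __init__(self, motion_dim=16, hidden_dim=64):
        super().__init__()
        self.rnn = nn.GRU(motion_dim, hidden_dim, batch_first=True)

    def forward(self, z_motion):            # z_motion: (B, T, motion_dim)
        codes, _ = self.rnn(z_motion)       # codes: (B, T, hidden_dim)
        return codes

class VideoGenerator(nn.Module):
    # Maps one static content code plus per-frame motion codes to video frames.
    def __init__(self, content_dim=64, hidden_dim=64, channels=3, size=32):
        super().__init__()
        self.channels, self.size = channels, size
        self.net = nn.Sequential(
            nn.Linear(content_dim + hidden_dim, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, channels * size * size),
            nn.Tanh(),
        )

    def forward(self, z_content, motion_codes):
        B, T, _ = motion_codes.shape
        # The content code is shared across all T frames of the video.
        z = z_content.unsqueeze(1).expand(-1, T, -1)
        x = torch.cat([z, motion_codes], dim=-1)
        frames = self.net(x).view(B, T, self.channels, self.size, self.size)
        return frames

# Usage: sample one content code per video and a short motion trajectory.
B, T = 4, 8
z_content = torch.randn(B, 64)              # static background/content latent
z_motion = torch.randn(B, T, 16)            # trajectory of sampled motion points
encoder, generator = MotionEncoder(), VideoGenerator()
video = generator(z_content, encoder(z_motion))   # shape: (4, 8, 3, 32, 32)

Factoring the sampling this way means the background can be held fixed while only the motion trajectory is resampled, which is the kind of control over the generation process the abstract refers to.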

Related Material


[bibtex]
@InProceedings{Palazzo_2018_ECCV_Workshops,
author = {Palazzo, S. and Spampinato, C. and D'Oro, P. and Giordano, D. and Shah, M.},
title = {Generating Synthetic Video Sequences by Explicitly Modeling Object Motion},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV) Workshops},
month = {September},
year = {2018}
}