DYAN: A Dynamical Atoms-Based Network For Video Prediction

Wenqian Liu, Abhishek Sharma, Octavia Camps, Mario Sznaier; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 170-185


The ability to anticipate the future is essential for making real-time critical decisions, provides valuable information for understanding dynamic natural scenes, and can help with unsupervised video representation learning. State-of-the-art video prediction is based on complex architectures that need to learn large numbers of parameters, are potentially hard to train, slow to run, and may produce blurry predictions. In this paper, we introduce DYAN, a novel network with very few parameters that is easy to train and produces accurate, high-quality frame predictions significantly faster than previous approaches. DYAN owes its good qualities to its encoder and decoder, which are designed following concepts from systems identification theory and exploit the dynamics-based invariants of the data. Extensive experiments on several standard video datasets show that DYAN is superior at generating frames and that it generalizes well across domains.
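The dynamics-based encoder/decoder idea can be illustrated with a minimal sketch (not the authors' code): treat each pixel's temporal trajectory as a combination of "dynamical atoms" p^t, the impulse responses of first-order linear systems with pole p; fit codes over the observed frames, then decode the next frame by extending every atom one step. The fixed pole grid and the plain least-squares encoder below are simplifying assumptions; the paper learns its poles and uses a sparse encoder.

```python
import numpy as np

T = 10                                          # number of observed frames
poles = np.array([0.6, 0.75, 0.9, 1.0, 1.05])   # hypothetical pole grid (assumed)
t = np.arange(1, T + 1)
D = poles[None, :] ** t[:, None]                # (T, N) dictionary: atom columns p^t
d_next = poles ** (T + 1)                       # each atom extended one step ahead

def predict_next(frames):
    """frames: (T, num_pixels) stack of flattened frames -> predicted next frame."""
    # encoder: least-squares codes per pixel trajectory (stand-in for a
    # sparse-coding encoder)
    codes, *_ = np.linalg.lstsq(D, frames, rcond=None)
    # decoder: evaluate the one-step-extended dictionary on the codes
    return d_next @ codes

# toy "video": two pixels decaying geometrically with ratio 0.9
video = 0.9 ** t[:, None] * np.array([1.0, 2.0])
print(predict_next(video))  # ≈ 0.9**(T+1) * [1.0, 2.0]
```

Because the toy trajectory lies exactly in the span of the 0.9-pole atom, the one-step prediction recovers the next geometric term; real videos require richer pole banks and per-pixel sparse codes, which is what the network parameterizes.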

Related Material

@InProceedings{Liu_2018_ECCV,
  author    = {Liu, Wenqian and Sharma, Abhishek and Camps, Octavia and Sznaier, Mario},
  title     = {DYAN: A Dynamical Atoms-Based Network For Video Prediction},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  month     = {September},
  year      = {2018}
}