WALDO: Future Video Synthesis Using Object Layer Decomposition and Parametric Flow Prediction

Guillaume Le Moing, Jean Ponce, Cordelia Schmid; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 23229-23241

Abstract
This paper presents WALDO (WArping Layer-Decomposed Objects), a novel approach to the prediction of future video frames from past ones. Individual images are decomposed into multiple layers combining object masks and a small set of control points. The layer structure is shared across all frames in each video to build dense inter-frame connections. Complex scene motions are modeled by combining parametric geometric transformations associated with individual layers, and video synthesis is broken down into discovering the layers associated with past frames, predicting the corresponding transformations for upcoming ones, warping the associated object regions accordingly, and filling in the remaining image parts. Extensive experiments on multiple benchmarks, including urban videos (Cityscapes and KITTI) and videos featuring nonrigid motions (UCF-Sports and H3.6M), show that our method consistently outperforms the state of the art by a significant margin in every case. Code, pretrained models, and video samples synthesized by our approach can be found on the project webpage.
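
To make the warp-and-composite step described above concrete, the sketch below illustrates warping one object layer with a per-layer parametric transformation and compositing it over a background. It is a minimal, assumption-laden illustration rather than the authors' implementation: it uses a single affine transformation per layer and PyTorch's grid_sample, whereas WALDO derives per-layer transformations from predicted control points, and the object mask, transformation parameters, and inpainting of disoccluded regions are placeholders here.

# Minimal sketch (not the authors' code): warp a masked object layer with a
# 2x3 affine matrix, then alpha-composite the layers back into a frame.
import torch
import torch.nn.functional as F

def warp_layer(rgb, mask, theta):
    """Warp an object layer (rgb, mask) with a per-layer affine matrix theta (N,2,3)."""
    n, c, h, w = rgb.shape
    grid = F.affine_grid(theta, size=(n, c, h, w), align_corners=False)
    warped_rgb = F.grid_sample(rgb, grid, align_corners=False)
    warped_mask = F.grid_sample(mask, grid, align_corners=False)
    return warped_rgb, warped_mask

def composite(layers):
    """Alpha-composite (rgb, mask) layers back-to-front into a single frame."""
    out = torch.zeros_like(layers[0][0])
    for rgb, mask in layers:
        out = mask * rgb + (1.0 - mask) * out
    return out

# Toy usage: one moving object layer over a static background layer.
frame = torch.rand(1, 3, 128, 256)            # last observed frame
obj_mask = torch.zeros(1, 1, 128, 256)
obj_mask[..., 40:80, 100:160] = 1.0           # hypothetical object mask
theta = torch.tensor([[[1.0, 0.0, -0.1],      # small horizontal shift (placeholder)
                       [0.0, 1.0,  0.0]]])
obj_rgb, obj_alpha = warp_layer(frame * obj_mask, obj_mask, theta)
next_frame = composite([(frame, torch.ones_like(obj_mask)), (obj_rgb, obj_alpha)])

Regions uncovered by the moving layer would still need to be filled in, which is the inpainting step mentioned in the abstract.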

Related Material
[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Le_Moing_2023_ICCV,
    author    = {Le Moing, Guillaume and Ponce, Jean and Schmid, Cordelia},
    title     = {WALDO: Future Video Synthesis Using Object Layer Decomposition and Parametric Flow Prediction},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {23229-23241}
}