Neural Re-Simulation for Generating Bounces in Single Images

Carlo Innamorati, Bryan Russell, Danny M. Kaufman, Niloy J. Mitra; The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 8719-8728

Abstract

We introduce a method to generate videos of dynamic virtual objects plausibly interacting via collisions with a still image's environment. Given a starting trajectory, physically simulated with the estimated geometry of a single, static input image, we learn to 'correct' this trajectory to a visually plausible one via a neural network. The neural network can then be seen as learning to 'correct' traditional simulation output, generated with incomplete and imprecise world information, to obtain context-specific, visually plausible re-simulated output, a process we call neural re-simulation. We train our system on a set of 50k synthetic scenes where a virtual moving object (ball) has been physically simulated. We demonstrate our approach on both our synthetic dataset and a collection of real-life images depicting everyday scenes, obtaining consistent improvement over baseline alternatives throughout.
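The two-stage pipeline the abstract describes (a traditional simulation run against estimated scene geometry, followed by a learned correction of that trajectory) can be sketched minimally as follows. This is an illustrative assumption, not the authors' implementation: `simulate_trajectory`, `correct_trajectory`, and the fixed-residual "correction" stand-in are all hypothetical names introduced here; the paper's actual correction is a trained neural network.

```python
# Hedged sketch of "neural re-simulation":
# 1) simulate with imprecise, estimated geometry,
# 2) apply a learned correction to the resulting trajectory.
# The stub correction below is a placeholder for the paper's network.

def simulate_trajectory(y0, vy0, floor_y, steps=60, dt=1 / 30,
                        g=-9.8, restitution=0.7):
    """Coarse 1D ball simulation against an *estimated* floor height."""
    y, vy, traj = y0, vy0, []
    for _ in range(steps):
        vy += g * dt
        y += vy * dt
        if y < floor_y:          # collision with the estimated geometry
            y = floor_y
            vy = -vy * restitution
        traj.append(y)
    return traj

def correct_trajectory(traj, floor_offset=0.05):
    """Stand-in for the learned correction network: here, a fixed
    residual nudging the trajectory toward the true floor height."""
    return [y + floor_offset for y in traj]

initial = simulate_trajectory(y0=1.0, vy0=0.0, floor_y=0.0)
plausible = correct_trajectory(initial)
```

In the paper, the correction model is trained on paired (simulated, ground-truth) trajectories from the 50k synthetic scenes; here the residual is hard-coded purely to show where that model slots into the pipeline.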

Related Material

[pdf] [supp] [bibtex]
@InProceedings{Innamorati_2019_ICCV,
  author    = {Innamorati, Carlo and Russell, Bryan and Kaufman, Danny M. and Mitra, Niloy J.},
  title     = {Neural Re-Simulation for Generating Bounces in Single Images},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2019}
}