Guiding Video Prediction with Explicit Procedural Knowledge

Patrick Takenaka, Johannes Maucher, Marco F. Huber; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2023, pp. 1084-1092

Abstract

We propose a general way to integrate procedural knowledge of a domain into deep learning models. We apply it to the case of video prediction, building on top of object-centric deep models, and show that this leads to better performance than using data-driven models alone. We develop an architecture that facilitates latent space disentanglement in order to use the integrated procedural knowledge, and establish a setup that allows the model to learn the procedural interface in the latent space through the downstream task of video prediction. We contrast its performance with a state-of-the-art data-driven approach and show that problems where purely data-driven approaches struggle can be handled by using knowledge about the domain, providing an alternative to simply collecting more data.

Related Material

[pdf]
[bibtex]
@InProceedings{Takenaka_2023_ICCV,
    author    = {Takenaka, Patrick and Maucher, Johannes and Huber, Marco F.},
    title     = {Guiding Video Prediction with Explicit Procedural Knowledge},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2023},
    pages     = {1084-1092}
}