GATSBI: Generative Agent-Centric Spatio-Temporal Object Interaction

Cheol-Hui Min, Jinseok Bae, Junho Lee, Young Min Kim; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 3074-3083


We present GATSBI, a generative model that transforms a sequence of raw observations into a structured latent representation that fully captures the spatio-temporal context of the agent's actions. In vision-based decision-making scenarios, an agent faces complex high-dimensional observations where multiple entities interact with each other. The agent requires a good scene representation of the visual observation that discerns essential components that consistently propagate along the time horizon. Our method, GATSBI, utilizes unsupervised scene representation learning to successfully separate an active agent, static background, and passive objects. GATSBI then models the interactions reflecting the causal relationships among decomposed entities and predicts physically plausible future states. Our model generalizes to a variety of environments where different types of robots and objects dynamically interact with each other. GATSBI achieves superior performance on scene decomposition and video prediction compared to its state-of-the-art counterparts, and can be readily applied to sequential decision making of an intelligent agent.
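The abstract describes a pipeline of decomposing each observation into an active agent, static background, and passive objects, then rolling the decomposed latents forward under the agent's actions. The following is a minimal, hypothetical sketch of that structure only; the slot names, the toy `encode`/`interact` functions, and the one-way agent-to-object coupling are illustrative assumptions, not the paper's learned model.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SceneLatent:
    # Hypothetical structured latent mirroring the decomposition the
    # abstract describes: active agent, static background, passive objects.
    agent: List[float]
    background: List[float]
    objects: List[List[float]]

def encode(frame: List[float]) -> SceneLatent:
    # Placeholder encoder: in the paper this decomposition is learned
    # without supervision; here we simply split a flat observation into slots.
    return SceneLatent(agent=frame[:2], background=frame[2:4],
                       objects=[frame[4:6], frame[6:8]])

def interact(z: SceneLatent, action: List[float]) -> SceneLatent:
    # Toy causal interaction: the action moves the agent, each passive object
    # is nudged by the agent's motion, and the background stays static.
    agent = [a + u for a, u in zip(z.agent, action)]
    objects = [[o + 0.1 * u for o, u in zip(obj, action)] for obj in z.objects]
    return SceneLatent(agent=agent, background=z.background, objects=objects)

def rollout(frame: List[float], actions: List[List[float]]) -> List[SceneLatent]:
    # Predict a sequence of future structured states from one observation.
    z = encode(frame)
    states = [z]
    for u in actions:
        z = interact(z, u)
        states.append(z)
    return states
```

In this sketch, only the agent slot responds directly to actions, and objects change only through the agent, which is the kind of causal asymmetry among decomposed entities the abstract alludes to.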

Related Material

@InProceedings{Min_2021_CVPR,
    author    = {Min, Cheol-Hui and Bae, Jinseok and Lee, Junho and Kim, Young Min},
    title     = {GATSBI: Generative Agent-Centric Spatio-Temporal Object Interaction},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {3074-3083}
}