MultiModal Action Conditioned Video Simulation

Yichen Li, Antonio Torralba; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2025, pp. 14173-14183

Abstract


Current video models fail as world models because they lack fine-grained control. General-purpose household robots require real-time fine motor control to handle delicate tasks and urgent situations. In this work, we introduce fine-grained multimodal actions to capture such precise control. We consider the senses of proprioception, kinesthesia, force haptics, and muscle activation. Such multimodal senses naturally enable fine-grained interactions that are difficult to simulate with text-conditioned generative models. To effectively simulate fine-grained multisensory actions, we develop a feature learning paradigm that aligns these modalities while preserving the unique information each modality provides. We further propose a regularization scheme to enhance the causality of the action trajectory features in representing intricate interaction dynamics. Experiments show that incorporating multimodal senses improves simulation accuracy and reduces temporal drift. Extensive ablation studies and downstream applications demonstrate the effectiveness and practicality of our work.
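The abstract describes the alignment objective only at a high level. As a rough illustration, not the paper's actual formulation, a symmetric contrastive loss is one standard way to align features from two action modalities (e.g., proprioception and force haptics) in a shared space; the function name, shapes, and temperature below are all assumptions.

```python
import torch
import torch.nn.functional as F

def alignment_loss(shared_a, shared_b, temperature=0.07):
    """Symmetric InfoNCE-style loss pulling paired multimodal features together.

    shared_a, shared_b: (batch, dim) projections of two action modalities
    into a shared embedding space. This is a generic sketch, not the
    authors' method.
    """
    a = F.normalize(shared_a, dim=-1)
    b = F.normalize(shared_b, dim=-1)
    logits = a @ b.t() / temperature  # pairwise cosine similarities
    targets = torch.arange(a.size(0), device=a.device)
    # Cross-entropy in both directions: each sample's positive is its pair.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```

Preserving the modality-unique information the abstract mentions would require something beyond this shared-space loss, for instance a per-modality private branch or reconstruction term; the sketch covers only the alignment half.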

Related Material


[bibtex]
@InProceedings{Li_2025_ICCV,
    author    = {Li, Yichen and Torralba, Antonio},
    title     = {MultiModal Action Conditioned Video Simulation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2025},
    pages     = {14173-14183}
}