@InProceedings{Jia_2025_ICCV,
    author    = {Jia, Kai and Liu, Tengyu and Pei, Mingtao and Zhu, Yixin and Huang, Siyuan},
    title     = {PrimHOI: Compositional Human-Object Interaction via Reusable Primitives},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2025},
    pages     = {11491-11501}
}
PrimHOI: Compositional Human-Object Interaction via Reusable Primitives
Abstract
Synthesizing realistic Human-Object Interaction (HOI) motions is essential for creating believable digital characters and intelligent robots. Existing approaches rely on data-intensive learning models that struggle with the compositional structure of daily HOI motions, particularly for complex multi-object manipulation tasks. The exponential growth of possible interaction scenarios makes comprehensive data collection prohibitively expensive. The fundamental challenge is synthesizing unseen, complex HOI sequences without extensive task-specific training data. Here we show that PrimHOI generates complex HOI motions through spatial and temporal composition of generalizable interaction primitives defined by relative geometry. Our approach demonstrates that repetitive local contact patterns -- grasping, clamping, and supporting -- serve as reusable building blocks for diverse interaction sequences. Unlike previous data-driven methods requiring end-to-end training for each task variant, PrimHOI achieves zero-shot transfer to unseen scenarios through hierarchical primitive planning. Experimental validation demonstrates substantial improvements in adaptability, diversity, and motion quality compared to existing approaches.