Full-Body Articulated Human-Object Interaction
Abstract
Fine-grained capture of 3D Human-Object Interactions (HOIs) boosts human activity understanding and facilitates various downstream visual tasks. Prior models mostly assume that humans interact with rigid objects using only a few body parts, limiting their scope. In this paper, we address the challenging problem of Full-Body Articulated Human-Object Interaction (f-AHOI), wherein whole human bodies interact with articulated objects whose parts are connected by movable joints. We present Capturing Human and Articulated-object InteRactionS (CHAIRS), a large-scale motion-captured f-AHOI dataset consisting of 17.3 hours of versatile interactions between 46 participants and 81 articulated and rigid sittable objects. CHAIRS provides 3D meshes of both humans and articulated objects during the entire interactive process, as well as realistic and physically plausible full-body interactions. We show the value of CHAIRS with object pose estimation. By learning the geometrical relationships in HOI, we devise the first model that leverages human pose estimation to tackle articulated object pose and shape estimation during whole-body interactions. Given an image and an estimated human pose, our model reconstructs the object pose and shape and optimizes the reconstruction according to a learned interaction prior. Under two evaluation settings, our model significantly outperforms the baselines. We further demonstrate the value of CHAIRS with a downstream task: generating interacting human poses conditioned on articulated objects. We hope CHAIRS will push the community toward finer-grained interaction understanding. Data and code will be made publicly available.
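The optimization step described in the abstract can be illustrated with a minimal, hypothetical PyTorch sketch: a feed-forward estimate of the object state is refined by gradient descent against a learned interaction prior while the estimated human pose stays fixed. All names below (interaction_prior, human_joints, the object-state tensors) are illustrative placeholders rather than the CHAIRS API, and the prior is a dummy differentiable function so the sketch runs end to end.

import torch

# Hypothetical stand-in for a learned interaction prior: returns a scalar
# implausibility score for a human-object configuration. A real prior would be
# a trained network; here a simple differentiable penalty keeps the sketch runnable.
def interaction_prior(human_joints, obj_rotation, obj_translation, obj_joint_angles):
    # e.g., keep the object near the pelvis joint (index 0) and regularize
    # the articulation angles and global rotation
    return (
        torch.sum((obj_translation - human_joints[0]) ** 2)
        + 0.01 * torch.sum(obj_joint_angles ** 2)
        + 0.01 * torch.sum(obj_rotation ** 2)
    )

# Fixed human pose estimate (e.g., 22 skeleton joints in 3D), assumed given.
human_joints = torch.randn(22, 3)

# Initial object state from a feed-forward reconstruction (random placeholders here).
obj_rotation = torch.randn(3, requires_grad=True)      # axis-angle global rotation
obj_translation = torch.randn(3, requires_grad=True)   # global position
obj_joint_angles = torch.zeros(4, requires_grad=True)  # articulation joint angles

optimizer = torch.optim.Adam([obj_rotation, obj_translation, obj_joint_angles], lr=1e-2)

# Refine the object pose/shape parameters so the interaction prior scores
# the configuration as plausible, with the human pose held fixed.
for step in range(200):
    optimizer.zero_grad()
    loss = interaction_prior(human_joints, obj_rotation, obj_translation, obj_joint_angles)
    loss.backward()
    optimizer.step()

print("final prior score:", loss.item())

This is only a sketch of the general refine-against-a-prior pattern under the stated assumptions; the paper's actual model, losses, and object parameterization are defined in the full text.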
Related Material
[pdf] [supp] [arXiv] [bibtex]

@InProceedings{Jiang_2023_ICCV,
  author    = {Jiang, Nan and Liu, Tengyu and Cao, Zhexuan and Cui, Jieming and Zhang, Zhiyuan and Chen, Yixin and Wang, He and Zhu, Yixin and Huang, Siyuan},
  title     = {Full-Body Articulated Human-Object Interaction},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2023},
  pages     = {9365-9376}
}