Ditto: Building Digital Twins of Articulated Objects From Interaction

Zhenyu Jiang, Cheng-Chun Hsu, Yuke Zhu; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 5616-5626

Abstract


Digitizing physical objects into the virtual world has the potential to unlock new research and applications in embodied AI and mixed reality. This work focuses on recreating interactive digital twins of real-world articulated objects, which can be directly imported into virtual environments. We introduce Ditto to learn articulation model estimation and 3D geometry reconstruction of an articulated object through interactive perception. Given a pair of visual observations of an articulated object before and after interaction, Ditto reconstructs part-level geometry and estimates the articulation model of the object. We employ implicit neural representations for joint geometry and articulation modeling. Our experiments show that Ditto effectively builds digital twins of articulated objects in a category-agnostic way. We also apply Ditto to real-world objects and deploy the recreated digital twins in physical simulation. Code and additional results are available at https://ut-austin-rpl.github.io/Ditto/
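The pipeline the abstract describes — encoding a pair of observations before and after interaction, then using implicit neural representations to decode part-level occupancy and joint parameters — could be sketched roughly as follows. This is a toy illustration with randomly initialized layers and made-up dimensions, not the authors' implementation; every function and parameter name below is an assumption for the sake of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(points, dim=128):
    # Toy point-cloud encoder: per-point linear lift + max pooling
    # (a stand-in for the learned point-cloud encoders such a system uses).
    w = rng.standard_normal((points.shape[1], dim))
    return np.tanh(points @ w).max(axis=0)

def decode_occupancy(query, feat):
    # Implicit decoder: maps a 3D query point plus the fused feature to an
    # occupancy probability and a 2-way part logit (mobile vs. static part).
    x = np.concatenate([query, feat])
    w1 = rng.standard_normal((x.shape[0], 64))
    w2 = rng.standard_normal((64, 3))  # [occupancy, part_0, part_1]
    h = np.tanh(x @ w1)
    out = h @ w2
    return 1.0 / (1.0 + np.exp(-out[0])), out[1:]

def estimate_joint(feat):
    # Articulation head: regress a joint axis (unit 3-vector) and a
    # scalar joint state (e.g. rotation angle or translation distance).
    w = rng.standard_normal((feat.shape[0], 4))
    out = feat @ w
    axis = out[:3] / np.linalg.norm(out[:3])
    return axis, out[3]

# Observation pair: point clouds captured before and after interaction.
pc_before = rng.standard_normal((1024, 3))
pc_after = rng.standard_normal((1024, 3))
fused = np.concatenate([encode(pc_before), encode(pc_after)])

occ, seg_logits = decode_occupancy(np.zeros(3), fused)
axis, state = estimate_joint(fused)
```

Querying `decode_occupancy` over a dense 3D grid would yield the part-level geometry, while `estimate_joint` supplies the articulation model — together, the ingredients of an interactive digital twin.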

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Jiang_2022_CVPR,
    author    = {Jiang, Zhenyu and Hsu, Cheng-Chun and Zhu, Yuke},
    title     = {Ditto: Building Digital Twins of Articulated Objects From Interaction},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {5616-5626}
}