Toward Human-Like Grasp: Dexterous Grasping via Semantic Representation of Object-Hand

Tianqiang Zhu, Rina Wu, Xiangbo Lin, Yi Sun; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 15741-15751

Abstract


In recent years, many dexterous robotic hands have been designed to assist or replace human hands in various tasks, but teaching them to perform dexterous operations the way human hands do remains challenging. In this paper, we propose a grasp synthesis framework that enables robots to grasp and manipulate objects like human beings. We first build a dataset by accurately segmenting the functional areas of each object and annotating a semantic touch code for every functional area, which guides the dexterous hand in completing the functional grasp and post-grasp manipulation. The dataset contains 129 objects from 18 categories selected from four existing datasets, and 15 people participated in the annotation. We then carefully design four loss functions to constrain the network, which successfully generates functional grasps of the dexterous hand under the guidance of the semantic touch code. Thorough experiments on synthetic data show that our model can robustly generate functional grasps, even for objects it has not seen before.
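The abstract summarizes the pipeline without specifying the annotation format or the four loss terms. As a rough illustration only, the sketch below shows one plausible way a per-object semantic touch code could be stored and how four grasp-synthesis loss terms might be combined; all names, fields, and weights here are assumptions for illustration, not the paper's actual data format or losses.

```python
# Hypothetical sketch: a possible storage format for a semantic touch code
# annotation and a four-term grasp loss. Names and weights are illustrative
# assumptions, not taken from the paper.
from dataclasses import dataclass, field
from typing import Dict, List

# Hand parts that may be assigned to an object's functional area.
HAND_PARTS = ["thumb", "index", "middle", "ring", "little", "palm"]

@dataclass
class FunctionalArea:
    name: str                       # e.g. "handle", "lid", "trigger"
    point_ids: List[int]            # indices into the object point cloud
    # touch_code maps each hand part to 1 (should touch this area) or 0.
    touch_code: Dict[str, int] = field(default_factory=dict)

@dataclass
class ObjectAnnotation:
    category: str                   # e.g. "mug"
    areas: List[FunctionalArea]

def combined_grasp_loss(l_touch: float, l_hand_pen: float,
                        l_obj_pen: float, l_reg: float,
                        w=(1.0, 1.0, 1.0, 0.1)) -> float:
    """Weighted sum of four illustrative loss terms (touch-code consistency,
    hand self-penetration, hand-object penetration, pose regularization)."""
    return w[0] * l_touch + w[1] * l_hand_pen + w[2] * l_obj_pen + w[3] * l_reg

if __name__ == "__main__":
    mug = ObjectAnnotation(
        category="mug",
        areas=[FunctionalArea(
            name="handle",
            point_ids=[12, 13, 14],
            touch_code={"thumb": 1, "index": 1, "middle": 1,
                        "ring": 0, "little": 0, "palm": 1},
        )],
    )
    print(mug.category, [a.name for a in mug.areas])
    print("loss =", combined_grasp_loss(0.5, 0.1, 0.2, 0.05))
```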

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Zhu_2021_ICCV,
    author    = {Zhu, Tianqiang and Wu, Rina and Lin, Xiangbo and Sun, Yi},
    title     = {Toward Human-Like Grasp: Dexterous Grasping via Semantic Representation of Object-Hand},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {15741-15751}
}