RDPN6D: Residual-based Dense Point-wise Network for 6Dof Object Pose Estimation Based on RGB-D Images

Zong-Wei Hong, Yen-Yang Hung, Chu-Song Chen; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 5251-5260

Abstract


In this work, we introduce a novel method for calculating the 6DoF pose of an object from a single RGB-D image. Unlike existing methods that either directly predict objects' poses or rely on sparse keypoints for pose recovery, our approach addresses this challenging task using dense correspondence, i.e., we regress the object coordinates for each visible pixel. Our method leverages existing object detection methods. We incorporate a re-projection mechanism that adjusts the camera's intrinsic matrix to accommodate cropping in RGB-D images. Moreover, we transform the 3D object coordinates into a residual representation, which effectively reduces the output space and yields superior performance. We conducted extensive experiments to validate the efficacy of our approach for 6D pose estimation. Our approach outperforms most previous methods, especially in occlusion scenarios, and demonstrates notable improvements over the state-of-the-art methods. Our code is available on GitHub.
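The following is a minimal NumPy sketch of the two mechanisms named in the abstract, not the authors' code: adjusting the pinhole intrinsic matrix when a detected region is cropped and resized, and encoding dense object coordinates as residuals. The function names (adjust_intrinsics_for_crop, encode_residual_coords), the anchor-based residual scheme, and the parameter conventions are illustrative assumptions rather than the paper's exact formulation.

import numpy as np

def adjust_intrinsics_for_crop(K, crop_xywh, out_hw):
    """Return the intrinsics of a cropped-and-resized image.

    K         : (3, 3) intrinsics [[fx, 0, cx], [0, fy, cy], [0, 0, 1]].
    crop_xywh : (x0, y0, w, h) crop window in the original image, in pixels.
    out_hw    : (out_h, out_w) size the crop is resized to before the network.
    """
    x0, y0, w, h = crop_xywh
    out_h, out_w = out_hw
    sx, sy = out_w / w, out_h / h          # resize factors
    K_new = np.asarray(K, dtype=np.float64).copy()
    K_new[0, 0] *= sx                      # fx scales with the horizontal resize
    K_new[1, 1] *= sy                      # fy scales with the vertical resize
    K_new[0, 2] = (K[0, 2] - x0) * sx      # principal point shifts with the crop,
    K_new[1, 2] = (K[1, 2] - y0) * sy      # then scales with the resize
    return K_new

def encode_residual_coords(obj_coords, anchors):
    """Encode dense 3D object coordinates as residuals w.r.t. the nearest anchor.

    obj_coords : (N, 3) per-pixel coordinates in the object frame.
    anchors    : (M, 3) reference points on the object model.
    Returns each pixel's anchor index and its bounded residual offset.
    """
    d = np.linalg.norm(obj_coords[:, None, :] - anchors[None, :, :], axis=-1)
    idx = d.argmin(axis=1)                 # anchor assignment per pixel
    residual = obj_coords - anchors[idx]   # small-range offsets
    return idx, residual

Regressing a discrete anchor assignment plus a bounded residual offset, rather than absolute object coordinates, is one way to realize the reduced output space claimed in the abstract; the paper's precise residual definition may differ.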

Related Material


@InProceedings{Hong_2024_CVPR,
  author    = {Hong, Zong-Wei and Hung, Yen-Yang and Chen, Chu-Song},
  title     = {RDPN6D: Residual-based Dense Point-wise Network for 6Dof Object Pose Estimation Based on RGB-D Images},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2024},
  pages     = {5251-5260}
}