Object Pose Estimation via the Aggregation of Diffusion Features

Tianfu Wang, Guosheng Hu, Hongguang Wang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 10238-10247

Abstract


Estimating the pose of objects from images is a crucial task of 3D scene understanding, and recent approaches have shown promising results on very large benchmarks. However, these methods experience a significant performance drop when dealing with unseen objects. We believe this results from the limited generalizability of image features. To address this problem, we conduct an in-depth analysis of the features of diffusion models, e.g., Stable Diffusion, which hold substantial potential for modeling unseen objects. Based on this analysis, we innovatively introduce these diffusion features for object pose estimation. To achieve this, we propose three distinct architectures that can effectively capture and aggregate diffusion features of different granularity, greatly improving the generalizability of object pose estimation. Our approach outperforms the state-of-the-art methods by a considerable margin on three popular benchmark datasets: LM, O-LM, and T-LESS. In particular, our method achieves higher accuracy than the previous best methods on unseen objects: 98.2% vs. 93.5% on Unseen LM and 85.9% vs. 76.3% on Unseen O-LM, showing the strong generalizability of our method. Our code is released at https://github.com/Tianfu18/diff-feats-pose.
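The core idea in the abstract — aggregating diffusion features of different granularity — can be illustrated with a minimal sketch. The snippet below is a toy stand-in, not the paper's architecture: it simulates coarse-to-fine feature maps (as one might hook out of different Stable Diffusion U-Net layers), upsamples them to a common resolution, and fuses them with a random linear projection in place of a learned aggregation head. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def aggregate_multiscale_features(feature_maps, out_dim=64, seed=0):
    """Toy multi-granularity aggregation (illustrative, not the paper's method).

    feature_maps: list of arrays shaped (C_i, H_i, W_i), where each spatial
    size evenly divides the finest map's size (typical of U-Net pyramids).
    Returns a fused descriptor map of shape (out_dim, H_max, W_max).
    """
    target_h = max(f.shape[1] for f in feature_maps)
    target_w = max(f.shape[2] for f in feature_maps)

    upsampled = []
    for f in feature_maps:
        # Nearest-neighbor upsampling of coarser maps to the finest resolution.
        rh, rw = target_h // f.shape[1], target_w // f.shape[2]
        upsampled.append(f.repeat(rh, axis=1).repeat(rw, axis=2))

    # Concatenate along the channel axis: (sum C_i, H, W).
    stacked = np.concatenate(upsampled, axis=0)

    # A random projection stands in for a learned fusion layer.
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((out_dim, stacked.shape[0]))
    return np.einsum('dc,chw->dhw', proj, stacked)

# Simulated coarse-to-fine maps (channels, height, width).
feats = [np.ones((8, 4, 4)), np.ones((4, 8, 8)), np.ones((2, 16, 16))]
desc = aggregate_multiscale_features(feats)
print(desc.shape)  # (64, 16, 16)
```

In the paper itself, the three proposed architectures differ precisely in how this fusion step is realized; the sketch only conveys the shared multi-granularity input/output structure.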

Related Material


BibTeX:
@InProceedings{Wang_2024_CVPR,
  author    = {Wang, Tianfu and Hu, Guosheng and Wang, Hongguang},
  title     = {Object Pose Estimation via the Aggregation of Diffusion Features},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
  pages     = {10238-10247}
}