CullNet: Calibrated and Pose Aware Confidence Scores for Object Pose Estimation

Kartik Gupta, Lars Petersson, Richard Hartley; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2019

Abstract


We present a new approach for real-time, single-view, image-based object pose estimation. Specifically, this paper addresses the problem of culling false positives among several pose proposal estimates. Our approach targets the inaccurate confidence values predicted by CNNs, which many current methods use to choose a final object pose prediction. We present a new network, CullNet, to solve this task. CullNet takes as input pairs of pose masks rendered from a 3D model and cropped regions of the original image, and uses them to calibrate the confidence scores of the pose proposals. As our results show, this new set of confidence scores is significantly more reliable for accurate object pose estimation. Our experimental results on multiple challenging datasets (LINEMOD and Occlusion LINEMOD) clearly reflect the utility of the proposed method, and our overall pose estimation pipeline outperforms state-of-the-art object pose estimation methods on these standard benchmarks. The code is available at https://github.com/kartikgupta-at-ANU/CullNet.
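As a rough sketch of the culling step described in the abstract (not the authors' implementation), each pose proposal is re-scored from its (rendered mask, image crop) pair and the top-scoring pose is kept. In the paper this score comes from the learned CullNet CNN; here a hypothetical IoU-style overlap between the rendered pose mask and the object mask in the crop stands in for it:

```python
# Illustrative sketch of CullNet-style proposal culling. The scoring
# function below is a hypothetical stand-in: the real CullNet is a CNN
# trained to output calibrated confidences for (rendered mask, crop) pairs.

def mask_overlap_score(rendered_mask, crop_mask):
    """Hypothetical stand-in scorer: IoU between the pose mask rendered
    from the 3D model and the object mask in the image crop (both given
    here as flat binary lists)."""
    inter = sum(1 for r, c in zip(rendered_mask, crop_mask) if r and c)
    union = sum(1 for r, c in zip(rendered_mask, crop_mask) if r or c)
    return inter / union if union else 0.0

def cull_proposals(proposals, crop_mask, score_fn=mask_overlap_score):
    """Re-score every pose proposal and keep the best one.

    `proposals` is a list of (pose, rendered_mask) pairs; the pose with
    the highest calibrated score is returned along with that score.
    """
    scored = [(score_fn(mask, crop_mask), pose) for pose, mask in proposals]
    best_score, best_pose = max(scored)
    return best_pose, best_score

# Usage: two proposals, the second one matching the crop mask exactly.
crop = [1, 1, 1, 0]
proposals = [("pose_A", [1, 1, 0, 0]), ("pose_B", [1, 1, 1, 0])]
best_pose, best_score = cull_proposals(proposals, crop)
```

The point of the sketch is the pipeline shape: render a mask per proposal, score each pair, select by the calibrated score rather than the CNN's raw confidence.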

Related Material


[bibtex]
@InProceedings{Gupta_2019_ICCV,
author = {Gupta, Kartik and Petersson, Lars and Hartley, Richard},
title = {CullNet: Calibrated and Pose Aware Confidence Scores for Object Pose Estimation},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2019}
}