Allocentric Pose Estimation

M. Jose Antonio, Luc De Raedt, Tinne Tuytelaars; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2013, pp. 289-296

Abstract


The task of object pose estimation has been a challenge since the early days of computer vision. To estimate the pose (or viewpoint) of an object, people have mostly looked at object-intrinsic features, such as shape or appearance. Surprisingly, informative features provided by other, external elements in the scene have so far mostly been ignored. At the same time, contextual cues have been shown to be of great benefit for related tasks such as object detection or action recognition. In this paper, we explore how information from other objects in the scene can be exploited for pose estimation. In particular, we look at object configurations. We show that, starting from noisy object detections and pose estimates, exploiting the estimated pose and location of other objects in the scene helps to estimate the objects' poses more accurately. We explore both a camera-centered and an object-centered representation for relations. Experiments on the challenging KITTI dataset show that object configurations can indeed be used as a complementary cue to appearance-based pose estimation. In addition, object-centered relational representations can assist object detection.
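To make the distinction between the two relational representations mentioned above concrete, the sketch below contrasts a camera-centered (egocentric) viewpoint, measured relative to the ray from the camera to an object, with an object-centered (allocentric) viewpoint, measured relative to the ray between two objects. This is only an illustration of the general idea, not the paper's actual formulation: the `Detection` fields, the ground-plane geometry, and the function names `egocentric_pose` / `allocentric_pose` are assumptions made for this example.

```python
import math
from dataclasses import dataclass


@dataclass
class Detection:
    """Hypothetical noisy detection on the ground plane (camera frame)."""
    x: float    # lateral offset from the camera's optical axis
    z: float    # depth along the optical axis
    yaw: float  # estimated heading of the object, in radians


def _wrap(angle: float) -> float:
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(angle), math.cos(angle))


def egocentric_pose(det: Detection) -> float:
    """Camera-centered viewpoint: heading relative to the camera-to-object ray."""
    viewing_ray = math.atan2(det.x, det.z)
    return _wrap(det.yaw - viewing_ray)


def allocentric_pose(reference: Detection, other: Detection) -> float:
    """Object-centered viewpoint: heading of `other` relative to the ray
    from `reference` to `other`, so the relation does not depend on where
    the camera stands."""
    ray = math.atan2(other.x - reference.x, other.z - reference.z)
    return _wrap(other.yaw - ray)


if __name__ == "__main__":
    # Two cars driving in the same direction, seen from a KITTI-like camera.
    car_a = Detection(x=-2.0, z=10.0, yaw=math.radians(90))
    car_b = Detection(x=3.0, z=15.0, yaw=math.radians(90))
    print("egocentric pose of A:", math.degrees(egocentric_pose(car_a)))
    print("allocentric pose of B w.r.t. A:",
          math.degrees(allocentric_pose(car_a, car_b)))
```

In a setting like the one the abstract describes, such pairwise relations computed from noisy detections would serve as a complementary cue on top of appearance-based pose estimates, rather than replacing them.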

Related Material


[bibtex]
@InProceedings{Antonio_2013_ICCV,
author = {Antonio, M. Jose and De Raedt, Luc and Tuytelaars, Tinne},
title = {Allocentric Pose Estimation},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2013}
}