A Semantic Occlusion Model for Human Pose Estimation From a Single Depth Image

Umer Rafi, Juergen Gall, Bastian Leibe; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2015, pp. 67-74

Abstract


Human pose estimation from depth data has made significant progress in recent years, and commercial sensors now estimate human poses in real time. However, state-of-the-art methods often fail when humans are partially occluded by objects. In this work, we introduce a semantic occlusion model that is incorporated into a regression forest approach for human pose estimation from depth data. The approach exploits contextual information from occluding objects, such as a table, to predict the locations of occluded joints. In our experiments on real and synthetic data, we show that our occlusion model increases joint estimation accuracy and outperforms the commercial Kinect 2 SDK for occluded joints.
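To illustrate the general idea of regression-forest pose estimation from depth data described in the abstract, the following is a minimal sketch, assuming toy depth-difference features, synthetic training targets, and scikit-learn's RandomForestRegressor; it is not the authors' implementation, and the feature layout, data, and aggregation step are placeholders for illustration only.

# Illustrative sketch only (assumptions: synthetic depth-difference features,
# scikit-learn regression forest); not the paper's semantic occlusion model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy data: each sample stands in for a pixel described by depth-difference
# features (depth at the pixel minus depth at a few fixed probe offsets).
n_samples, n_features, n_joints = 2000, 32, 3
X = rng.normal(size=(n_samples, n_features))            # stand-in depth features
true_w = rng.normal(size=(n_features, 3 * n_joints))    # hidden mapping for the toy data
Y = X @ true_w + 0.05 * rng.normal(size=(n_samples, 3 * n_joints))  # 3D offsets per joint

# Fit a regression forest that maps per-pixel features to 3D joint offsets.
forest = RandomForestRegressor(n_estimators=50, max_depth=12, random_state=0)
forest.fit(X, Y)

# At test time, every pixel votes for joint locations; here the predicted
# offsets are simply averaged as a crude stand-in for the voting/aggregation step.
X_test = rng.normal(size=(100, n_features))
votes = forest.predict(X_test)                          # shape (100, 3 * n_joints)
joint_estimates = votes.reshape(-1, n_joints, 3).mean(axis=0)
print("Estimated joint offsets (toy):")
print(joint_estimates)

In the paper, the forest additionally exploits context from occluding objects (e.g., a table) so that votes can still be cast for joints that are not visible in the depth image; the sketch above omits that occlusion handling entirely.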

Related Material


[pdf]
[bibtex]
@InProceedings{Rafi_2015_CVPR_Workshops,
author = {Rafi, Umer and Gall, Juergen and Leibe, Bastian},
title = {A Semantic Occlusion Model for Human Pose Estimation From a Single Depth Image},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2015}
}