Real-Time Body Tracking with One Depth Camera and Inertial Sensors
Thomas Helten, Meinard Müller, Hans-Peter Seidel, Christian Theobalt; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2013, pp. 1105-1112
Abstract
In recent years, the availability of inexpensive depth cameras, such as the Microsoft Kinect, has boosted research on monocular full-body skeletal pose tracking. Unfortunately, existing trackers often fail to capture poses where a single camera provides insufficient data, such as non-frontal poses and poses with body-part occlusions. In this paper, we present a novel sensor fusion approach for real-time full-body tracking that succeeds in such difficult situations. It takes inspiration from previous tracking solutions and combines a generative tracker with a discriminative tracker that retrieves the closest poses from a database. In contrast to previous work, both trackers employ data from a small number of inexpensive body-worn inertial sensors. These sensors provide reliable and complementary information when the monocular depth information alone is insufficient. We also contribute new algorithmic solutions for fusing depth and inertial data in both trackers. One is a new visibility model that determines global body pose, occlusions, and usable depth correspondences, and that decides which data modality to use for discriminative tracking. We further contribute a new inertial-based pose retrieval and an adapted late fusion step to compute the final body pose.
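The inertial-based pose retrieval mentioned above amounts, at its core, to a nearest-neighbor lookup in a pose database indexed by sensor readings. The following Python sketch illustrates that idea under simplifying assumptions: each of a small number of body-worn IMUs contributes a unit-quaternion orientation, and the database stores per-frame sensor orientations alongside the corresponding skeletal pose. All names, array shapes, and the distance measure are illustrative and not the paper's actual formulation.

import numpy as np

def quat_distance(q1, q2):
    # Angular distance between unit quaternions, invariant to sign flips.
    dot = np.clip(np.abs(np.sum(q1 * q2, axis=-1)), 0.0, 1.0)
    return 2.0 * np.arccos(dot)

def retrieve_closest_pose(query_quats, db_quats, db_poses):
    # query_quats : (S, 4)     unit quaternions from S body-worn sensors
    # db_quats    : (N, S, 4)  per-frame sensor orientations in the database
    # db_poses    : (N, J, 3)  per-frame skeletal poses (J joint positions)
    # Sum the per-sensor angular distances to every database frame,
    # then return the pose of the best-matching frame.
    dists = quat_distance(db_quats, query_quats[None, :, :]).sum(axis=1)
    best = np.argmin(dists)
    return db_poses[best], dists[best]

# Toy usage with random data: 4 sensors, 1000 database frames, 20 joints.
rng = np.random.default_rng(0)
db_quats = rng.normal(size=(1000, 4, 4))
db_quats /= np.linalg.norm(db_quats, axis=-1, keepdims=True)
db_poses = rng.normal(size=(1000, 20, 3))
query = db_quats[42]  # pretend the live IMU reading matches frame 42
pose, dist = retrieve_closest_pose(query, db_quats, db_poses)

As the abstract notes, the proposed visibility model is what decides when the discriminative tracker should rely on such inertial data rather than on depth correspondences.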
Related Material
[pdf]
[bibtex]
@InProceedings{Helten_2013_ICCV,
author = {Helten, Thomas and M{\"u}ller, Meinard and Seidel, Hans-Peter and Theobalt, Christian},
title = {Real-Time Body Tracking with One Depth Camera and Inertial Sensors},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2013}
}