ActiveMoCap: Optimized Viewpoint Selection for Active Human Motion Capture

Sena Kiciroglu, Helge Rhodin, Sudipta N. Sinha, Mathieu Salzmann, Pascal Fua; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 103-112

Abstract
The accuracy of monocular 3D human pose estimation depends on the viewpoint from which the image is captured. While freely moving cameras, such as on drones, provide control over this viewpoint, automatically positioning them at the location that will yield the highest accuracy remains an open problem. This is the problem that we address in this paper. Specifically, given a short video sequence, we introduce an algorithm that predicts which viewpoints should be chosen to capture future frames so as to maximize 3D human pose estimation accuracy. The key idea underlying our approach is a method to estimate the uncertainty of the 3D body pose estimates. We integrate several sources of uncertainty, originating from deep-learning-based regressors and temporal smoothness. Our motion planner yields improved 3D body pose estimates and outperforms or matches existing planners based on person following and orbiting.
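The core idea, choosing the next viewpoint by minimizing predicted pose-estimation uncertainty, can be sketched as follows. This is a minimal illustrative toy, not the paper's actual formulation: the function names, the cosine-shaped regressor-noise model, and the displacement-based smoothness penalty are all assumptions made for the example, standing in for the learned uncertainty terms the paper integrates.

```python
import numpy as np

def predicted_uncertainty(candidate_yaw, current_yaw,
                          regressor_sigma, motion_penalty=0.005):
    # Regressor term: viewpoint-dependent noise of the pose regressor
    # (here a toy function of the candidate yaw angle, in degrees).
    regressor_term = regressor_sigma(candidate_yaw)
    # Smoothness term: large camera moves weaken temporal priors, so
    # penalize angular displacement from the current viewpoint.
    smoothness_term = motion_penalty * abs(candidate_yaw - current_yaw)
    return regressor_term + smoothness_term

def select_next_viewpoint(candidates, current_yaw, regressor_sigma):
    # Score every candidate viewpoint and pick the lowest-uncertainty one.
    scores = [predicted_uncertainty(c, current_yaw, regressor_sigma)
              for c in candidates]
    return candidates[int(np.argmin(scores))]

# Toy usage: assume side views (yaw near 90 or 135 degrees) are less
# depth-ambiguous than frontal ones, encoded by the cosine term below.
sigma = lambda yaw: 1.0 + 0.5 * np.cos(np.radians(yaw))
best = select_next_viewpoint([0.0, 45.0, 90.0, 135.0],
                             current_yaw=45.0, regressor_sigma=sigma)
```

With this toy model the planner trades off a lower-noise side view against the cost of moving the camera there; with the chosen penalty weight the 135-degree candidate wins.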

Related Material

[bibtex]
@InProceedings{Kiciroglu_2020_CVPR,
author = {Kiciroglu, Sena and Rhodin, Helge and Sinha, Sudipta N. and Salzmann, Mathieu and Fua, Pascal},
title = {ActiveMoCap: Optimized Viewpoint Selection for Active Human Motion Capture},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}