DoubleFusion: Real-Time Capture of Human Performances With Inner Body Shapes From a Single Depth Sensor

Tao Yu, Zerong Zheng, Kaiwen Guo, Jianhui Zhao, Qionghai Dai, Hao Li, Gerard Pons-Moll, Yebin Liu; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 7287-7296

Abstract

We propose DoubleFusion, a new real-time system that combines volumetric dynamic reconstruction with data-driven template fitting to simultaneously reconstruct detailed geometry, non-rigid motion, and the inner human body shape from a single depth camera. One of the key contributions of this method is a double-layer representation consisting of a complete parametric body shape inside and a gradually fused outer surface layer. A pre-defined node graph on the body surface parameterizes the non-rigid deformations near the body, while a free-form, dynamically changing graph parameterizes the outer surface layer far from the body, allowing more general reconstruction. We further propose a joint motion tracking method based on the double-layer representation to enable robust and fast motion tracking. Moreover, the inner body shape is optimized online and constrained to fit inside the outer surface layer. Overall, our method enables increasingly denoised, detailed, and complete surface reconstruction, fast motion tracking, and plausible inner body shape recovery in real time. In particular, experiments show improved fast motion tracking and loop closure performance on challenging scenarios.
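The node-graph parameterization mentioned above follows the embedded-deformation idea common to fusion-style trackers: each graph node carries a rigid transform, and a surface point is warped by a distance-weighted blend of the transforms of nearby nodes. The sketch below illustrates that warp in NumPy; the function names, the Gaussian weighting, and the bandwidth `sigma` are illustrative assumptions, not the paper's exact implementation (which runs on the GPU and restricts blending to a node's k-nearest neighbors).

```python
import numpy as np

def node_weights(v, nodes, sigma=0.05):
    # Gaussian influence of each graph node on point v, normalized to sum to 1.
    # (Illustrative choice; real systems typically truncate to k-nearest nodes.)
    d2 = np.sum((nodes - v) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return w / w.sum()

def warp_point(v, nodes, rotations, translations, sigma=0.05):
    """Warp a surface point by blending per-node rigid transforms:
    v' = sum_k w_k * (R_k (v - g_k) + g_k + t_k),
    the standard embedded-deformation warp field."""
    w = node_weights(v, nodes, sigma)
    out = np.zeros(3)
    for wk, g, R, t in zip(w, nodes, rotations, translations):
        out += wk * (R @ (v - g) + g + t)
    return out
```

Because the weights are normalized, identity rotations with zero translations leave the point unchanged, and a common translation shared by all nodes moves the point rigidly; non-rigid motion emerges when nearby nodes carry different transforms.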

Related Material

@InProceedings{Yu_2018_CVPR,
author = {Yu, Tao and Zheng, Zerong and Guo, Kaiwen and Zhao, Jianhui and Dai, Qionghai and Li, Hao and Pons-Moll, Gerard and Liu, Yebin},
title = {DoubleFusion: Real-Time Capture of Human Performances With Inner Body Shapes From a Single Depth Sensor},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}