Deformable 3D Fusion: From Partial Dynamic 3D Observations to Complete 4D Models

Weipeng Xu, Mathieu Salzmann, Yongtian Wang, Yue Liu; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015, pp. 2183-2191

Abstract


Capturing the 3D motion of dynamic, non-rigid objects has attracted significant attention in computer vision. Existing methods typically require either complete 3D volumetric observations, or a shape template. In this paper, we introduce a template-less 4D reconstruction method that incrementally fuses highly-incomplete 3D observations of a deforming object, and generates a complete, temporally-coherent shape representation of the object. To this end, we design an online algorithm that alternately registers new observations to the current model estimate and updates the model. We demonstrate the effectiveness of our approach at reconstructing non-rigidly moving objects from highly-incomplete measurements on both sequences of partial 3D point clouds and Kinect videos.
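The online loop described in the abstract, alternating between registering a new partial observation to the current model and updating the model with the aligned points, can be sketched in a greatly simplified form. The sketch below is not the paper's method: it uses a rigid Procrustes alignment with nearest-neighbor correspondences, whereas the paper handles non-rigid deformation; all function names and the voxel-merge update are illustrative assumptions.

```python
import numpy as np

def register(obs, model):
    """Align a partial observation to the current model (rigid sketch;
    the paper's registration is non-rigid)."""
    # Nearest-neighbor correspondences from observation to model.
    d = np.linalg.norm(obs[:, None, :] - model[None, :, :], axis=2)
    corr = model[d.argmin(axis=1)]
    # Procrustes: best rigid (R, t) mapping obs onto its correspondences.
    mu_o, mu_c = obs.mean(0), corr.mean(0)
    H = (obs - mu_o).T @ (corr - mu_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_c - R @ mu_o
    return obs @ R.T + t

def update(model, aligned, voxel=0.05):
    """Fuse aligned points into the model, deduplicating on a voxel grid
    so the representation stays compact as observations accumulate."""
    pts = np.vstack([model, aligned])
    keys = np.round(pts / voxel).astype(int)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return pts[np.sort(idx)]

# One step of the online loop: register the new frame, then update.
rng = np.random.default_rng(0)
model = rng.normal(size=(200, 3))                      # current model estimate
obs = model[:80] + np.array([0.01, -0.01, 0.02])       # shifted partial view
model = update(model, register(obs, model))
```

In the full method, each incoming frame would pass through this register-then-update cycle, with the registration accounting for deformation rather than a single rigid motion.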

Related Material


[pdf]
[bibtex]
@InProceedings{Xu_2015_ICCV,
author = {Xu, Weipeng and Salzmann, Mathieu and Wang, Yongtian and Liu, Yue},
title = {Deformable 3D Fusion: From Partial Dynamic 3D Observations to Complete 4D Models},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2015}
}