Deep Learning Whole Body Point Cloud Scans From a Single Depth Map

Nolan Lunscher, John Zelek; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2018, pp. 1095-1102

Abstract


Personalized knowledge about body shape has numerous applications in fashion and clothing, as well as in health monitoring. Whole body 3D scanning presents a relatively simple mechanism for individuals to obtain this information about themselves without needing much knowledge of anthropometry. With current implementations, however, scanning devices are large, complex and expensive. In order to make such systems as accessible and widespread as possible, it is necessary to simplify the process and reduce their hardware requirements. Deep learning models have emerged as the leading method of tackling visual tasks, including various aspects of 3D reconstruction. In this paper, we demonstrate that, by leveraging deep learning, it is possible to create very simple whole body scanners that require only a single input depth map to operate. We show that the presented model is able to produce whole body point clouds with an accuracy of 5.19 mm.
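As a minimal illustrative sketch (not the authors' network, which is described in the full paper), the snippet below shows the standard back-projection step that turns a single depth map into the point-cloud representation this kind of scanner produces and against which an accuracy figure such as 5.19 mm would be measured. It assumes a pinhole camera with known intrinsics; the function name and the intrinsic values in the example are hypothetical.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (H x W, metres) into an (N, 3) point cloud
    using a pinhole camera model. Intrinsics fx, fy, cx, cy are assumed
    known for the sensor; zero-depth pixels are treated as missing."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # points in camera coordinates

# Usage with a synthetic 240x320 depth map and made-up intrinsics.
if __name__ == "__main__":
    depth = np.full((240, 320), 2.0)  # flat surface 2 m from the camera
    cloud = depth_to_point_cloud(depth, fx=285.0, fy=285.0, cx=160.0, cy=120.0)
    print(cloud.shape)  # (76800, 3)
```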

Related Material


[pdf]
[bibtex]
@InProceedings{Lunscher_2018_CVPR_Workshops,
author = {Lunscher, Nolan and Zelek, John},
title = {Deep Learning Whole Body Point Cloud Scans From a Single Depth Map},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2018}
}