Deep Learning Anthropomorphic 3D Point Clouds from a Single Depth Map Camera Viewpoint

Nolan Lunscher, John Zelek; Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 2017, pp. 689-696

Abstract

In footwear, fit is highly dependent on foot shape, which is not fully captured by shoe size. Scanners can be used to acquire better sizing information and allow for more personalized footwear matching; however, scanning an object usually requires many images for reconstruction. Semantics, such as knowing the kind of object in view, can be leveraged to determine the full 3D shape given only a single input view. Deep learning methods have been shown to reconstruct 3D shape from limited inputs for highly symmetrical objects such as furniture and vehicles. We apply a deep learning approach to the domain of foot scanning and present a method to reconstruct a 3D point cloud from a single input depth map. Anthropomorphic body parts can be challenging due to their irregular shapes, limited symmetries, and the difficulty of parameterizing them. We train a view-synthesis-based network and show that our method can produce foot scans with an accuracy of 1.55 mm from a single input depth map.
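The page does not include the authors' code. As a rough illustration of the two ideas the abstract describes, the sketch below shows (1) an encoder-decoder network that predicts a depth map from a new camera viewpoint given a single input depth map, and (2) pinhole back-projection of a depth map into a 3D point cloud. The architecture, layer sizes, tensor shapes, and camera intrinsics are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the authors' code) of depth-map view synthesis
# followed by back-projection into a point cloud. All network and
# camera parameters below are assumed for illustration.
import torch
import torch.nn as nn


class DepthViewSynthesis(nn.Module):
    """Encoder-decoder that maps one depth map to a depth map
    rendered from another (implicitly fixed) viewpoint."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, depth):  # depth: (B, 1, H, W), float, e.g. meters
        return self.decoder(self.encoder(depth))


def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an (H, W) float depth map into an (N, 3) point cloud.

    Assumes a pinhole camera with intrinsics (fx, fy, cx, cy);
    pixels with non-positive depth (no surface) are discarded.
    """
    h, w = depth.shape
    v, u = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = torch.stack([x, y, z], dim=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]


if __name__ == "__main__":
    net = DepthViewSynthesis()
    input_depth = torch.rand(1, 1, 64, 64)  # placeholder input depth map
    predicted = net(input_depth)            # synthesized view, (1, 1, 64, 64)
    cloud = depth_to_point_cloud(
        predicted[0, 0].detach(), fx=500.0, fy=500.0, cx=32.0, cy=32.0
    )
    print(cloud.shape)  # (N, 3) points in the synthesized camera frame
```

Presumably, several such synthesized viewpoints around the foot could each be back-projected this way and merged into a single point cloud of the full foot; the abstract's 1.55 mm figure refers to the accuracy of the resulting scan.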

Related Material

[pdf]
[bibtex]
@InProceedings{Lunscher_2017_ICCV,
author = {Lunscher, Nolan and Zelek, John},
title = {Deep Learning Anthropomorphic 3D Point Clouds from a Single Depth Map Camera Viewpoint},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2017}
}