Point Cloud Completion of Foot Shape From a Single Depth Map for Fit Matching Using Deep Learning View Synthesis

Nolan Lunscher, John Zelek; Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 2017, pp. 2300-2305

Abstract


In clothing, and particularly in footwear, the variance in the size and shape of people and of clothing poses the problem of how to match items of clothing to a person. 3D scanning can be used to determine detailed personalized shape information, which can then be matched against clothing shape. In current implementations, however, this process is typically expensive and cumbersome. Ideally, in order to reduce the cost and complexity of scanning systems as much as possible, only a single image from a single camera would be needed. To this end, we focus on simplifying the process of scanning a person's foot for use in virtual footwear fitting. We use a deep learning approach to reconstruct whole foot shape from a single input depth map view by synthesizing a view containing the remaining information about the foot not seen from the input. Our method directly adds information to the input view and does not require any additional steps for point cloud alignment. We show that our method is capable of synthesizing the remainder of a point cloud with an accuracy of 2.92 ± 0.72 mm.
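The pipeline described above converts depth map views into point clouds before measuring completion accuracy. As context for how a single depth map yields 3D points, here is a minimal sketch of standard pinhole back-projection; it is not the paper's own code, and the camera intrinsics used in the example are hypothetical values chosen only for illustration.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (H x W, in metres) into an N x 3 point cloud
    using the pinhole camera model. Pixels with zero depth (no measurement)
    are dropped, so N <= H * W."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - cx) * z / fx  # inverse of the perspective projection
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=-1)

# Hypothetical example: a fronto-parallel surface 1 m from the camera,
# with made-up intrinsics (fx, fy, cx, cy).
depth = np.ones((4, 4))
cloud = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(cloud.shape)  # (16, 3)
```

Applying the same back-projection to both the input view and a synthesized opposite view, in a shared camera frame, yields a completed point cloud without a separate alignment step, which is the property the abstract highlights.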

Related Material


[bibtex]
@InProceedings{Lunscher_2017_ICCV,
author = {Lunscher, Nolan and Zelek, John},
title = {Point Cloud Completion of Foot Shape From a Single Depth Map for Fit Matching Using Deep Learning View Synthesis},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2017}
}