A 3D Morphable Model of Craniofacial Shape and Texture Variation

Hang Dai, Nick Pears, William A. P. Smith, Christian Duncan; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 3085-3093

Abstract


We present a fully automatic pipeline to train 3D Morphable Models (3DMMs), with contributions in pose normalisation, dense correspondence using both shape and texture information, and high-quality, high-resolution texture mapping. We propose a dense correspondence system that combines a hierarchical, parts-based template morphing framework in the shape channel with optical-flow refinement in the texture channel. The texture map is generated using raw texture images from five views. We employ a pixel-embedding method to maintain the texture map at the same high resolution as the raw texture images, rather than using per-vertex colour maps. The high-quality texture map is then used for statistical texture modelling. The Headspace dataset used for training includes demographic information about each subject, allowing the construction of both global 3DMMs and models tailored to specific gender and age groups. We build both global craniofacial 3DMMs and demographic sub-population 3DMMs from more than 1200 distinct identities. To our knowledge, we present the first public 3DMM of the full human head in both shape and texture: the Liverpool-York Head Model. Furthermore, we analyse the 3DMMs in terms of a range of performance metrics. Our evaluations reveal that the training pipeline constructs state-of-the-art models.
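Once meshes are in dense correspondence, a 3DMM is typically built by applying PCA to the stacked vertex coordinates. The sketch below illustrates that standard construction in NumPy; it is a minimal illustration of the general technique, not the paper's exact model-building code, and all function names and the toy data are assumptions.

```python
import numpy as np

def build_3dmm(meshes):
    """Build a PCA shape model from registered meshes.

    meshes: (N, V, 3) array of N meshes in dense correspondence,
    each with V vertices. Returns the mean shape vector, the
    principal components (one per row), and per-mode standard
    deviations.
    """
    n, v, _ = meshes.shape
    x = meshes.reshape(n, v * 3)            # flatten each mesh to a vector
    mean = x.mean(axis=0)
    centred = x - mean
    # SVD of the centred data matrix yields the PCA basis (rows of vt)
    _, s, vt = np.linalg.svd(centred, full_matrices=False)
    stds = s / np.sqrt(n - 1)               # std dev captured by each mode
    return mean, vt, stds

def synthesise(mean, components, stds, coeffs):
    """Generate a new shape from coefficients given in std-dev units."""
    k = len(coeffs)
    shape = mean + (coeffs * stds[:k]) @ components[:k]
    return shape.reshape(-1, 3)

# Toy usage with random stand-in "meshes" (illustrative only).
rng = np.random.default_rng(0)
meshes = rng.normal(size=(10, 50, 3))
mean, comps, stds = build_3dmm(meshes)
new_shape = synthesise(mean, comps, stds, np.array([1.0, -0.5]))
```

The same construction applied to the registered texture maps (flattened to per-pixel colour vectors) gives the statistical texture model; demographic sub-population models simply restrict the training set before PCA.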

Related Material


[bibtex]
@InProceedings{Dai_2017_ICCV,
author = {Dai, Hang and Pears, Nick and Smith, William A. P. and Duncan, Christian},
title = {A 3D Morphable Model of Craniofacial Shape and Texture Variation},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}