Learning Dense Facial Correspondences in Unconstrained Images

Ronald Yu, Shunsuke Saito, Haoxiang Li, Duygu Ceylan, Hao Li; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 4723-4732

Abstract


We present a minimalistic but effective neural network that computes dense facial correspondences in highly unconstrained RGB images. Our network learns a per-pixel flow and a matchability mask between 2D input photographs of a person and the projection of a textured 3D face model. To train such a network, we generate a massive dataset of synthetic faces with dense labels using renderings of a morphable face model with variations in pose, expression, lighting, and occlusion. We find that a training refinement using real photographs is required to drastically improve the network's ability to handle real images. When combined with a facial detection and 3D face fitting step, our approach outperforms state-of-the-art face alignment methods in both accuracy and speed. By directly estimating dense correspondences, we do not rely on the full visibility of sparse facial landmarks and are not limited to the model space of regression-based approaches. We also assess our method on video frames and demonstrate successful per-frame processing under extreme pose variations, occlusions, and lighting conditions. Compared to existing 3D facial tracking techniques, our fitting does not rely on previous frames or frontal facial initialization, and is robust to imperfect face detections.
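The abstract describes the network's output as a per-pixel flow plus a matchability mask relating the input photograph to the projected 3D face model. As a minimal sketch of how such an output could be turned into dense correspondences, assuming a simple thresholding of the matchability scores (the function name, array layout, and threshold are our assumptions, not the paper's reference implementation):

```python
import numpy as np

def dense_correspondences(flow, matchability, threshold=0.5):
    """Turn a predicted per-pixel flow field and matchability mask into
    dense 2D correspondences.

    flow:         (H, W, 2) array; flow[y, x] is the 2D offset from image
                  pixel (x, y) to its match in the rendered model projection.
    matchability: (H, W) array of per-pixel match probabilities.

    Returns an (H, W, 2) array of matched coordinates, with NaN wherever
    matchability falls below `threshold` (occluded or background pixels).
    """
    h, w = matchability.shape
    # Pixel grid: coords[y, x] = (x, y)
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    coords = np.stack([xs, ys], axis=-1).astype(np.float64)

    matched = coords + flow                      # follow the predicted flow
    matched[matchability < threshold] = np.nan   # drop unmatched pixels
    return matched
```

A downstream 3D face fitting step would then use only the non-NaN correspondences, which is consistent with the abstract's claim that the method does not require full visibility of sparse landmarks.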

Related Material


@InProceedings{Yu_2017_ICCV,
author = {Yu, Ronald and Saito, Shunsuke and Li, Haoxiang and Ceylan, Duygu and Li, Hao},
title = {Learning Dense Facial Correspondences in Unconstrained Images},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}