Guided Unsupervised Learning of Mode Specific Models for Facial Point Detection in the Wild

Shashank Jaiswal, Timur R. Almaev, Michel F. Valstar; Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 2013, pp. 370-377

Abstract


Facial landmark detection in real-world images is a difficult problem due to the high degree of variation in pose, facial expression and illumination, and the presence of occlusions and background clutter. We propose a system that handles variation in head pose and facial expression through a guided unsupervised learning approach that establishes mode-specific models. To detect 68 fiducial facial points, we employ Local Evidence Aggregated Regression, in which local patches provide evidence of the location of the target facial point through Support Vector Regressors. We improve an earlier version of this approach by employing mode-specific models and by replacing the original Local Binary Pattern features with Local Gabor Binary Patterns. We show that specialised model selection allows the system to deal with the wide range of head poses and facial expressions occurring in the wild without manual annotation of pose or expression, and that the proposed detector performs significantly better than the current state of the art.
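As a rough illustration of the two ideas in the abstract, the sketch below combines (a) unsupervised mode discovery, here approximated by k-means clustering of coarse appearance so that each cluster stands in for a pose/expression mode, and (b) local evidence aggregated regression, where Support Vector Regressors map Local Gabor Binary Pattern descriptors of local patches to offsets towards the landmark and the per-patch votes are averaged. Everything concrete in it is an assumption made for the sake of a runnable example: the synthetic data, the single landmark, the patch size, the Gabor/LBP parameters, and the choice of k-means and per-coordinate SVRs. It is not the authors' implementation.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.filters import gabor
from sklearn.cluster import KMeans
from sklearn.svm import SVR

RNG = np.random.default_rng(0)
CROP, HALF = 48, 12          # face-crop size and patch half-width (assumed values)


def lgbp_descriptor(patch, frequencies=(0.25, 0.4), n_orient=4, p=8, r=1):
    """Local Gabor Binary Patterns: Gabor-filter the patch, then pool an LBP
    histogram of each filter response's magnitude."""
    hists = []
    for f in frequencies:
        for k in range(n_orient):
            real, imag = gabor(patch, frequency=f, theta=k * np.pi / n_orient)
            mag = np.hypot(real, imag)
            mag = (255 * (mag - mag.min()) / (np.ptp(mag) + 1e-8)).astype(np.uint8)
            codes = local_binary_pattern(mag, p, r, method="uniform")
            hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
            hists.append(hist)
    return np.concatenate(hists)


def sample_patch(img, cx, cy):
    """Cut a (2*HALF x 2*HALF) patch centred, after clipping, on (cx, cy)."""
    cx = int(np.clip(cx, HALF, CROP - HALF))
    cy = int(np.clip(cy, HALF, CROP - HALF))
    return img[cy - HALF:cy + HALF, cx - HALF:cx + HALF], cx, cy


# --- toy data: random "face" crops, each with one landmark (x, y) ------------
n_faces = 40
images = RNG.random((n_faces, CROP, CROP))
landmarks = RNG.uniform(16, 32, size=(n_faces, 2))

# (1) Unsupervised mode discovery: cluster coarse appearance so each cluster
#     loosely stands in for a head-pose/expression mode.
coarse = images.reshape(n_faces, -1)
n_modes = 2
mode_clusterer = KMeans(n_clusters=n_modes, n_init=10, random_state=0).fit(coarse)

# (2) Mode-specific local-evidence regressors: patches displaced around the
#     landmark give (LGBP descriptor -> offset towards the landmark) training
#     pairs, with one SVR per output coordinate.
mode_models = {}
for m in range(n_modes):
    feats, offsets = [], []
    for i in np.where(mode_clusterer.labels_ == m)[0]:
        for _ in range(5):
            dx, dy = RNG.uniform(-6, 6, size=2)
            patch, px, py = sample_patch(
                images[i], landmarks[i, 0] + dx, landmarks[i, 1] + dy)
            feats.append(lgbp_descriptor(patch))
            offsets.append(landmarks[i] - (px, py))
    feats, offsets = np.asarray(feats), np.asarray(offsets)
    mode_models[m] = [SVR(kernel="rbf", C=1.0).fit(feats, offsets[:, d])
                      for d in range(2)]


def detect(img, n_probes=10):
    """Select the mode-specific model, then aggregate per-patch evidence."""
    m = mode_clusterer.predict(img.reshape(1, -1))[0]
    votes = []
    for _ in range(n_probes):
        cx, cy = RNG.uniform(HALF, CROP - HALF, size=2)
        patch, px, py = sample_patch(img, cx, cy)
        d = lgbp_descriptor(patch).reshape(1, -1)
        votes.append([px + mode_models[m][0].predict(d)[0],
                      py + mode_models[m][1].predict(d)[0]])
    return np.mean(votes, axis=0)        # aggregated local evidence


print("estimated landmark:", detect(images[0]), "ground truth:", landmarks[0])
```

The sketch only mirrors the overall structure described in the abstract, namely mode selection followed by aggregation of SVR evidence from local patches, and does so for a single point on synthetic data rather than for the 68 points detected in the paper.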

Related Material


[pdf]
[bibtex]
@InProceedings{Jaiswal_2013_ICCV_Workshops,
author = {Shashank Jaiswal and Timur R. Almaev and Michel F. Valstar},
title = {Guided Unsupervised Learning of Mode Specific Models for Facial Point Detection in the Wild},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
month = {December},
year = {2013}
}