Unsupervised Features for Facial Expression Intensity Estimation Over Time

Maren Awiszus, Stella Grasshof, Felix Kuhnke, Jorn Ostermann; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2018, pp. 1086-1094

Abstract

The diversity of facial shapes and motions among persons is one of the greatest challenges for the automatic analysis of facial expressions. In this paper, we propose a feature that describes expression intensity over time while being invariant to the person and to the type of expression performed. Our feature is a weighted combination of the dynamics of multiple points, adapted to the overall expression trajectory. We evaluate our method on several tasks, all related to the temporal analysis of facial expressions. The proposed feature is compared to a state-of-the-art method for expression intensity estimation, which it outperforms. We use the proposed feature to temporally align multiple sequences of recorded 3D facial expressions. Furthermore, we show how our feature can be used to reveal person-specific differences in the performance of facial expressions. Additionally, we apply our feature to identify local changes in face video sequences based on action unit labels. In all experiments, our feature proves robust to noise and outliers, making it applicable to a variety of applications in the analysis of facial movements.
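
As a rough illustration of the idea of combining the dynamics of multiple points into a single intensity curve, the following Python/NumPy sketch computes per-point displacement magnitudes from a neutral frame and weights each point by how well its trajectory agrees with the mean trajectory. The weighting scheme, the function name intensity_curve, and the toy data are illustrative assumptions and not the authors' actual formulation.

import numpy as np

def intensity_curve(landmarks, neutral_idx=0):
    # landmarks: array of shape (T, N, 3) -- T frames, N tracked facial points.
    # Per-point dynamics: displacement magnitude from a (roughly) neutral frame.
    disp = np.linalg.norm(landmarks - landmarks[neutral_idx], axis=2)  # (T, N)
    mean_traj = disp.mean(axis=1)                                      # (T,)

    # Weight each point by the correlation of its trajectory with the mean
    # trajectory; points that barely move or move inconsistently receive
    # little weight (assumed scheme, not necessarily the paper's).
    d = disp - disp.mean(axis=0)
    m = mean_traj - mean_traj.mean()
    corr = (d * m[:, None]).sum(axis=0) / (
        np.linalg.norm(d, axis=0) * np.linalg.norm(m) + 1e-8)
    weights = np.clip(corr, 0.0, None)
    weights /= weights.sum() + 1e-8

    curve = disp @ weights                 # weighted combination over points
    return curve / (curve.max() + 1e-8)    # normalise intensity to [0, 1]

if __name__ == "__main__":
    # Toy 3D sequence: 50 frames, 68 points, smooth onset-apex-offset motion plus noise.
    rng = np.random.default_rng(0)
    t = np.linspace(0, np.pi, 50)
    base = rng.normal(size=(68, 3))
    motion = np.sin(t)[:, None, None] * rng.normal(scale=0.1, size=(68, 3))
    seq = base[None] + motion + rng.normal(scale=0.01, size=(50, 68, 3))
    print(intensity_curve(seq).round(2))

The resulting one-dimensional curves could then, for example, be aligned across sequences with dynamic time warping, which is one plausible way to realise the temporal alignment experiment described in the abstract.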

Related Material

@InProceedings{Awiszus_2018_CVPR_Workshops,
author = {Awiszus, Maren and Grasshof, Stella and Kuhnke, Felix and Ostermann, Jorn},
title = {Unsupervised Features for Facial Expression Intensity Estimation Over Time},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2018}
}