A Compact Embedding for Facial Expression Similarity

Raviteja Vemulapalli, Aseem Agarwala; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 5683-5692

Abstract


Most of the existing work on automatic facial expression analysis focuses on discrete emotion recognition or facial action unit detection. However, facial expressions do not always fall neatly into pre-defined semantic categories. Also, the similarity between expressions measured in the action unit space need not correspond to how humans perceive expression similarity. Different from previous work, our goal is to describe facial expressions in a continuous fashion using a compact embedding space that mimics human visual preferences. To achieve this goal, we collect a large-scale faces-in-the-wild dataset with human annotations of the form: expressions A and B are more visually similar to each other than to expression C. We use this dataset to train a neural network that produces a compact (16-dimensional) expression embedding. We experimentally demonstrate that the learned embedding can be successfully used for various applications such as expression retrieval, photo album summarization, and emotion recognition. We also show that the embedding learned using the proposed dataset performs better than several other embeddings learned using existing emotion or action unit datasets.
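The triplet annotations described above ("A and B are more similar to each other than to C") naturally lend themselves to a triplet-style margin loss, where the odd-one-out image is pushed away from the similar pair in the embedding space. The sketch below is an illustrative numpy implementation of such a loss over 16-dimensional embeddings; the margin value and function names are assumptions for illustration, not details from the paper.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss on embedding vectors.

    Pulls `anchor` toward `positive` (the image annotated as more
    similar) and pushes it away from `negative` (the odd one out),
    up to a margin. The margin value 0.2 is an assumed default,
    not taken from the paper.
    """
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)  # squared distance to the similar image
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)  # squared distance to the odd one out
    return np.maximum(0.0, d_pos - d_neg + margin)

# Toy usage with 16-dimensional embeddings, as in the paper.
rng = np.random.default_rng(0)
a = rng.normal(size=16)
b = a + 0.01 * rng.normal(size=16)   # visually similar to a
c = a + 5.0 * rng.normal(size=16)    # visually dissimilar
print(triplet_loss(a, b, c))         # near zero: triplet already satisfied
print(triplet_loss(a, c, b))         # large: triplet violated
```

In training, such a loss would be minimized over the human-annotated triplets so that embedding distances reproduce the annotators' similarity judgments.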

Related Material


[bibtex]
@InProceedings{Vemulapalli_2019_CVPR,
author = {Vemulapalli, Raviteja and Agarwala, Aseem},
title = {A Compact Embedding for Facial Expression Similarity},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019},
pages = {5683-5692}
}