Recognition of Action Units in the Wild With Deep Nets and a New Global-Local Loss

C. Fabian Benitez-Quiroz, Yan Wang, Aleix M. Martinez; The IEEE International Conference on Computer Vision (ICCV), 2017, pp. 3970-3979

Abstract


Most previous algorithms for the recognition of Action Units (AUs) were trained on a small number of sample images. This was due to the limited amount of labeled data available at the time. This meant that data-hungry deep neural networks, which have shown their potential in other computer vision problems, could not be successfully trained to detect AUs. A recent publicly available database with close to a million labeled images has made this training possible. Image and individual variability (e.g., pose, scale, illumination, ethnicity) in this set is very large. Unfortunately, the labels in this dataset are not perfect (i.e., they are noisy), making convergence of deep nets difficult. To harness the richness of this dataset while being robust to the inaccuracies of the labels, we derive a novel global-local loss. This new loss function is shown to yield fast, globally meaningful convergence and locally accurate results. Comparative results with those of the EmotioNet challenge demonstrate that our newly derived loss yields recognition of AUs superior to that of state-of-the-art algorithms.
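The abstract does not give the exact form of the global-local loss; the paper itself should be consulted for the precise definition. As a rough illustration of the general idea of combining a global term (fit of the whole AU label vector) with a local term (per-AU accuracy), here is a minimal NumPy sketch. The specific terms (an L2 global term, a binary cross-entropy local term) and the mixing weight `lam` are assumptions for illustration, not the authors' formulation:

```python
import numpy as np

def global_local_loss(pred, target, lam=0.5):
    """Illustrative global-local loss (hypothetical form, NOT the paper's
    exact definition): a global L2 term over the full AU probability vector
    plus a local per-AU binary cross-entropy, mixed by weight lam."""
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)  # guard the logs against 0 and 1
    # Global term: how well the whole AU activation pattern is reproduced.
    global_term = np.mean((pred - target) ** 2)
    # Local term: per-AU classification accuracy (binary cross-entropy).
    local_term = -np.mean(target * np.log(pred)
                          + (1 - target) * np.log(1 - pred))
    return lam * global_term + (1 - lam) * local_term
```

A combined objective of this kind lets the global term steer early optimization toward plausible whole-face AU patterns, while the local term refines individual AU decisions, which matches the abstract's claim of globally meaningful convergence with locally accurate results.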

Related Material


@InProceedings{Benitez-Quiroz_2017_ICCV,
author = {Benitez-Quiroz, C. Fabian and Wang, Yan and Martinez, Aleix M.},
title = {Recognition of Action Units in the Wild With Deep Nets and a New Global-Local Loss},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}