Understanding and Mitigating Annotation Bias in Facial Expression Recognition

Yunliang Chen, Jungseock Joo; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 14980-14991

Abstract


The performance of a computer vision model depends on the size and quality of its training data. Recent studies have unveiled previously unknown composition biases in common image datasets, which in turn lead to skewed model outputs, and have proposed methods to mitigate these biases. However, most existing works assume that human-generated annotations can be considered gold-standard and unbiased. In this paper, we reveal that this assumption can be problematic, and that special care should be taken to prevent models from learning such annotation biases. We focus on facial expression recognition and compare the label biases between lab-controlled and in-the-wild datasets. We demonstrate that many expression datasets contain significant annotation biases between genders, especially for the happy and angry expressions, and that traditional methods cannot fully mitigate such biases in trained models. To remove expression annotation bias, we propose an AU-Calibrated Facial Expression Recognition (AUC-FER) framework that utilizes facial action units (AUs) and incorporates the triplet loss into the objective function. Experimental results suggest that the proposed method is more effective at removing expression annotation bias than existing techniques.
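To make the objective described above concrete, the following is a minimal sketch of a triplet loss combined with a standard cross-entropy classification loss, in the spirit of the AUC-FER description in the abstract. It is an illustration only: the way triplets are selected (e.g., pairing samples with matching AU patterns across genders), the margin value, and the weighting term lambda_triplet are assumptions introduced here, not the authors' exact formulation.

# Illustrative sketch, not the authors' implementation.
# Assumes embeddings for anchor/positive/negative samples are produced elsewhere,
# e.g., by an expression-recognition backbone, with triplets chosen using AU labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TripletCalibratedLoss(nn.Module):
    def __init__(self, margin: float = 0.2, lambda_triplet: float = 1.0):
        super().__init__()
        self.margin = margin            # triplet margin (assumed value)
        self.lambda_triplet = lambda_triplet  # weight of the triplet term (assumed value)

    def forward(self, logits, labels, anchor_emb, positive_emb, negative_emb):
        # Standard cross-entropy on the expression labels.
        ce = F.cross_entropy(logits, labels)
        # Triplet term: pull the anchor toward a positive sample (e.g., same AU
        # pattern, different gender) and push it away from a negative sample.
        triplet = F.triplet_margin_loss(
            anchor_emb, positive_emb, negative_emb, margin=self.margin
        )
        return ce + self.lambda_triplet * triplet

In training, such a combined loss would be computed per mini-batch after sampling triplets; the key design choice is that the triplet term operates on AU-informed pairs so the embedding space is encouraged to be consistent across the groups where annotation bias was observed.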

Related Material


BibTeX
@InProceedings{Chen_2021_ICCV,
    author    = {Chen, Yunliang and Joo, Jungseock},
    title     = {Understanding and Mitigating Annotation Bias in Facial Expression Recognition},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {14980-14991}
}