Large-Scale Facial Expression Recognition Using Dual-Domain Affect Fusion for Noisy Labels

Dexter Neo, Tsuhan Chen, Stefan Winkler; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2023, pp. 5692-5700

Abstract


Building models for human facial expression recognition (FER) is made difficult by subjective, ambiguous and noisy annotations. This is especially true when assigning a single emotion class label to facial expressions in large in-the-wild FER datasets. Human facial expressions often contain a mixture of different mental states, which exacerbates the problem of using single labels to categorize emotions. Dimensional models of affect, such as those based on valence and arousal, offer significant advantages over categorical models in representing human emotional states, but have remained relatively under-explored. In this paper, we propose an approach for dual-domain affect fusion which investigates the relationships between discrete emotion classes and their continuous representations. To address the underlying uncertainty of the labels, we formulate a set of mixed labels via a dual-domain label fusion module that exploits these intrinsic relationships. Finally, we show the benefits of the proposed approach on AffectNet, Aff-Wild, and MorphSet, in the presence of natural and synthetic noise.
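The general idea of mixing a categorical label with a label derived from continuous valence-arousal (VA) annotations can be sketched as follows. This is an illustrative example, not the authors' implementation: the class VA prototypes, the distance-based soft-label rule, and the mixing weight alpha are all hypothetical choices made for this sketch.

```python
# Hedged sketch of dual-domain label fusion: a (possibly noisy) one-hot
# emotion label is mixed with a soft label derived from continuous
# valence-arousal coordinates. Prototypes and alpha are hypothetical.
import numpy as np

# Hypothetical VA prototypes (valence, arousal) for 4 emotion classes.
PROTOTYPES = np.array([
    [0.8, 0.5],    # happy
    [-0.7, -0.3],  # sad
    [-0.6, 0.7],   # angry
    [0.0, 0.0],    # neutral
])

def va_soft_label(va, temperature=0.5):
    """Soft label from distances in VA space: closer prototypes get higher mass."""
    d = np.linalg.norm(PROTOTYPES - np.asarray(va, dtype=float), axis=1)
    logits = -d / temperature
    e = np.exp(logits - logits.max())  # stable softmax over negative distances
    return e / e.sum()

def fuse_labels(class_idx, va, alpha=0.6):
    """Convex mix of the one-hot categorical label and the VA-derived soft label."""
    one_hot = np.eye(len(PROTOTYPES))[class_idx]
    return alpha * one_hot + (1 - alpha) * va_soft_label(va)

# A sample annotated "happy" with VA close to the happy prototype:
mixed = fuse_labels(class_idx=0, va=(0.75, 0.45))
print(np.round(mixed, 3))  # a distribution peaked on class 0, summing to 1
```

Because both components are probability distributions, the fused label remains one, so it can be used directly with a cross-entropy loss; when the annotated class and the VA annotation disagree, the fused label softens toward the VA-consistent classes instead of committing fully to a possibly noisy hard label.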

Related Material


[pdf]
[bibtex]
@InProceedings{Neo_2023_CVPR,
    author    = {Neo, Dexter and Chen, Tsuhan and Winkler, Stefan},
    title     = {Large-Scale Facial Expression Recognition Using Dual-Domain Affect Fusion for Noisy Labels},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2023},
    pages     = {5692-5700}
}