Zero-Shot Audio-Visual Compound Expression Recognition Method based on Emotion Probability Fusion

Elena Ryumina, Maxim Markitantov, Dmitry Ryumin, Heysem Kaya, Alexey Karpov; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 4752-4760

Abstract


Compound Expression Recognition (CER), a subfield of affective computing, is a novel task in intelligent human-computer interaction and multimodal user interfaces. We propose a novel audio-visual method for CER. Our method relies on emotion recognition models that fuse modalities at the emotion probability level, while decisions regarding the prediction of compound expressions are based on the pair-wise sum of weighted emotion probability distributions. Notably, our method does not use any training data specific to the target task; the problem is thus a zero-shot classification task. The method is evaluated in multi-corpus training and cross-corpus validation setups. We achieve F1 scores of 32.15% and 25.56% on the AffWild2 and C-EXPR-DB test subsets, respectively, without training on the target corpus or target task. Our method is therefore on par with methods trained on the target corpus or for the target task. The source code is publicly available at https://elenaryumina.github.io/AVCER.
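The fusion scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the modality weights, the emotion label order, and the set of compound-expression pairs are all assumptions chosen for demonstration; the actual weights and compound classes are defined in the paper.

```python
import numpy as np

# Assumed order of basic emotion labels (illustrative only)
EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]

# Hypothetical compound expressions, each defined as a pair of basic emotions
COMPOUNDS = {
    "fearfully_surprised": ("fear", "surprise"),
    "happily_surprised": ("happiness", "surprise"),
    "sadly_surprised": ("sadness", "surprise"),
    "sadly_fearful": ("sadness", "fear"),
    "sadly_angry": ("sadness", "anger"),
}

def fuse_modalities(p_audio, p_video, w_audio=0.5, w_video=0.5):
    """Fuse per-modality emotion probability distributions with
    modality weights, then renormalise to a valid distribution."""
    p = w_audio * np.asarray(p_audio) + w_video * np.asarray(p_video)
    return p / p.sum()

def compound_scores(p_fused):
    """Score each compound expression as the pair-wise sum of the
    probabilities of its two constituent basic emotions."""
    idx = {e: i for i, e in enumerate(EMOTIONS)}
    return {name: p_fused[idx[a]] + p_fused[idx[b]]
            for name, (a, b) in COMPOUNDS.items()}

# Example: audio and video emotion probabilities for one sample
p_audio = [0.05, 0.05, 0.30, 0.05, 0.10, 0.40, 0.05]
p_video = [0.05, 0.05, 0.40, 0.05, 0.05, 0.35, 0.05]
p_fused = fuse_modalities(p_audio, p_video)
scores = compound_scores(p_fused)
prediction = max(scores, key=scores.get)  # compound class with highest score
```

Because the compound decision is derived purely from emotion probabilities produced by pre-trained emotion models, no compound-expression training labels are required, which is what makes the setup zero-shot.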

Related Material


@InProceedings{Ryumina_2024_CVPR,
  author    = {Ryumina, Elena and Markitantov, Maxim and Ryumin, Dmitry and Kaya, Heysem and Karpov, Alexey},
  title     = {Zero-Shot Audio-Visual Compound Expression Recognition Method based on Emotion Probability Fusion},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2024},
  pages     = {4752-4760}
}