Valence and Arousal Estimation Based on Multimodal Temporal-Aware Features for Videos in the Wild

Liyu Meng, Yuchen Liu, Xiaolong Liu, Zhaopei Huang, Wenqiang Jiang, Tenggan Zhang, Chuanhe Liu, Qin Jin; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2022, pp. 2345-2352

Abstract
This paper presents our submission to the Valence-Arousal Estimation Challenge of the 3rd Affective Behavior Analysis in-the-wild (ABAW) competition. Based on multimodal feature representations that fuse visual and aural information, we utilize two types of temporal encoders, a transformer-based encoder and an LSTM-based encoder, to capture the temporal context information in the video. With the temporal context-aware representations, we employ fully-connected layers to predict the valence and arousal values of the video frames. In addition, smoothing is applied to refine the initial predictions, and a model ensemble strategy combines results from different model setups. Our system achieves a Concordance Correlation Coefficient (CCC) of 0.606 for valence and 0.596 for arousal, for a mean CCC of 0.601, ranking first in the challenge.
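To make the described pipeline concrete, here is a minimal PyTorch sketch of one model variant: per-frame visual and aural features are fused by concatenation and a linear projection, a transformer-based encoder captures temporal context, and fully-connected layers regress per-frame valence and arousal. All module choices, dimensions, and hyperparameters are illustrative assumptions, not the authors' actual configuration.

```python
import torch
import torch.nn as nn

class TemporalAffectModel(nn.Module):
    """Sketch: fuse per-frame visual/aural features, encode temporal
    context with a transformer, predict valence/arousal per frame."""

    def __init__(self, vis_dim=512, aud_dim=128, d_model=256,
                 n_heads=4, n_layers=4):
        super().__init__()
        # Fuse the two modalities by concatenation + linear projection
        # (an illustrative choice; the paper's fusion may differ).
        self.fuse = nn.Linear(vis_dim + aud_dim, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Fully-connected head: one output each for valence and arousal.
        self.head = nn.Sequential(
            nn.Linear(d_model, d_model), nn.ReLU(),
            nn.Linear(d_model, 2), nn.Tanh())  # values lie in [-1, 1]

    def forward(self, vis, aud):
        # vis: (batch, frames, vis_dim), aud: (batch, frames, aud_dim)
        fused = self.fuse(torch.cat([vis, aud], dim=-1))
        context = self.encoder(fused)  # temporal context-aware features
        return self.head(context)      # (batch, frames, 2)
```

The LSTM-based variant would replace `self.encoder` with an `nn.LSTM` and take its output sequence as the temporal context features.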

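The post-processing step and the evaluation metric also translate directly. The sketch below pairs a simple moving-average smoother for the per-frame predictions (the paper's exact smoothing scheme and window size are not specified, so both are assumptions) with the standard Concordance Correlation Coefficient used to score each dimension.

```python
import torch
import torch.nn.functional as F

def ccc(pred: torch.Tensor, gold: torch.Tensor) -> torch.Tensor:
    """Concordance Correlation Coefficient between two 1-D series."""
    pred_m, gold_m = pred.mean(), gold.mean()
    cov = ((pred - pred_m) * (gold - gold_m)).mean()
    return 2 * cov / (pred.var(unbiased=False)
                      + gold.var(unbiased=False)
                      + (pred_m - gold_m) ** 2)

def smooth(preds: torch.Tensor, window: int = 5) -> torch.Tensor:
    """Moving-average smoothing of a 1-D prediction sequence.
    Illustrative only; the paper's smoothing scheme may differ.
    `window` is assumed odd so the output length matches the input."""
    kernel = torch.ones(1, 1, window) / window
    return F.conv1d(preds.view(1, 1, -1), kernel,
                    padding=window // 2).view(-1)
```

For example, `ccc(smooth(valence_pred), valence_gold)` scores a smoothed valence track; the challenge ranks systems by the mean of the valence and arousal CCCs.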
Related Material
[pdf]
[bibtex]
@InProceedings{Meng_2022_CVPR,
    author    = {Meng, Liyu and Liu, Yuchen and Liu, Xiaolong and Huang, Zhaopei and Jiang, Wenqiang and Zhang, Tenggan and Liu, Chuanhe and Jin, Qin},
    title     = {Valence and Arousal Estimation Based on Multimodal Temporal-Aware Features for Videos in the Wild},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2022},
    pages     = {2345-2352}
}