Causal Affect Prediction Model Using a Past Facial Image Sequence

Geesung Oh, Euiseok Jeong, Sejoon Lim; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2021, pp. 3550-3556

Abstract


Within research on human affective behavior, facial expression recognition has improved in performance along with the development of deep learning. For the best performance, not only past images but also future images should be used together with the current facial image; however, this requirement is an obstacle to application in real-time environments. In this paper, we propose the causal affect prediction network (CAPNet), which uses only past facial images to predict the corresponding affective valence and arousal. We train CAPNet to learn the causal inference between past images and the current affective valence and arousal through supervised learning, pairing each sequence of past images with the current label using the Aff-Wild2 dataset. Experiments show that the well-trained CAPNet outperforms the baseline of the second challenge of the Affective Behavior Analysis in-the-wild (ABAW2) Competition by predicting affective valence and arousal using only facial images from one-third of a second earlier. Therefore, in real-time applications, CAPNet can reliably predict affective valence and arousal with past data alone.
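The causal pairing described above, in which a sequence of past frames is coupled with the current valence/arousal label, can be sketched as follows. This is a minimal Python illustration under stated assumptions: the window length, the 30 fps frame rate behind the one-third-second gap, and the function name build_causal_pairs are all illustrative choices, not the authors' implementation.

    from typing import List, Tuple

    import numpy as np

    def build_causal_pairs(
        frames: np.ndarray,    # (T, H, W, 3) facial image sequence
        labels: np.ndarray,    # (T, 2) per-frame valence and arousal
        window: int = 8,       # number of past frames per sample (assumed)
        gap: int = 10,         # frames between last input and label; ~1/3 s at 30 fps
    ) -> List[Tuple[np.ndarray, np.ndarray]]:
        """Pair each window of past frames with the label `gap` frames later."""
        pairs = []
        # The first valid target index leaves room for both the window and the gap.
        for t in range(window + gap - 1, len(frames)):
            past = frames[t - gap - window + 1 : t - gap + 1]  # past frames only
            pairs.append((past, labels[t]))  # inputs end at t - gap, label is at t
        return pairs

In a supervised setup, such pairs would feed a sequence model so that, at inference time, the prediction for the current moment requires no frames beyond those already observed.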

Related Material


[pdf]
[bibtex]
@InProceedings{Oh_2021_ICCV,
    author    = {Oh, Geesung and Jeong, Euiseok and Lim, Sejoon},
    title     = {Causal Affect Prediction Model Using a Past Facial Image Sequence},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2021},
    pages     = {3550-3556}
}