How You Feelin'? Learning Emotions and Mental States in Movie Scenes

Dhruv Srivastava, Aditya Kumar Singh, Makarand Tapaswi; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, pp. 2517-2528

Abstract


Movie story analysis requires understanding characters' emotions and mental states. Towards this goal, we formulate emotion understanding as predicting a diverse and multi-label set of emotions at the level of a movie scene and for each character. We propose EmoTx, a multimodal Transformer-based architecture that ingests videos, multiple characters, and dialog utterances to make joint predictions. By leveraging annotations from the MovieGraphs dataset, we aim to predict classic emotions (e.g. happy, angry) as well as other mental states (e.g. honest, helpful). We conduct experiments on the 10 and 25 most frequently occurring labels, as well as a mapping that clusters 181 labels into 26. Ablation studies and comparisons against adapted state-of-the-art emotion recognition approaches show the effectiveness of EmoTx. Analysis of EmoTx's self-attention scores reveals that the model attends to character tokens when predicting expressive emotions, while other mental states rely on video and dialog cues.
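
The abstract describes the architecture only at a high level; below is a minimal PyTorch sketch of such a design, not the authors' released implementation. All names and dimensions are illustrative assumptions (the class name EmoTxSketch, the 2048/512/768-dim video, character, and dialog features, and the single shared classification head). The idea it illustrates: learnable per-label classifier tokens for the scene and for each character are concatenated with projected video, character, and dialog tokens, encoded jointly by a Transformer, and read out as one logit per label.

import torch
import torch.nn as nn

class EmoTxSketch(nn.Module):
    # Hypothetical sketch of a multimodal Transformer for multi-label
    # emotion and mental-state prediction; all dimensions are assumptions.
    def __init__(self, d_model=512, num_labels=25, max_chars=4,
                 nhead=8, num_layers=2):
        super().__init__()
        # Project pre-extracted features into a shared embedding space.
        self.video_proj = nn.Linear(2048, d_model)   # e.g. clip-level CNN features
        self.char_proj = nn.Linear(512, d_model)     # e.g. face-track features
        self.dialog_proj = nn.Linear(768, d_model)   # e.g. utterance embeddings
        # Modality embeddings tell the encoder which stream a token came from.
        self.modality_emb = nn.Embedding(3, d_model)
        # Learnable classifier tokens: one per label for the scene,
        # and one per label for each character slot.
        self.scene_cls = nn.Parameter(torch.randn(1, num_labels, d_model))
        self.char_cls = nn.Parameter(torch.randn(max_chars, num_labels, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        # A single shared head maps each classifier token to one logit.
        self.head = nn.Linear(d_model, 1)

    def forward(self, video, chars, dialog):
        # video: (B, Tv, 2048); chars: (B, C, Tc, 512); dialog: (B, Td, 768)
        B, C = chars.shape[:2]
        v = self.video_proj(video) + self.modality_emb.weight[0]
        c = self.char_proj(chars).flatten(1, 2) + self.modality_emb.weight[1]
        d = self.dialog_proj(dialog) + self.modality_emb.weight[2]
        cls = torch.cat([self.scene_cls, self.char_cls[:C]], 0)   # (1+C, K, D)
        cls = cls.flatten(0, 1).unsqueeze(0).expand(B, -1, -1)    # (B, (1+C)K, D)
        out = self.encoder(torch.cat([cls, v, c, d], dim=1))
        logits = self.head(out[:, :cls.shape[1]]).squeeze(-1)
        return logits.view(B, 1 + C, -1)  # row 0: scene; rows 1..C: characters

Since each scene or character can carry several labels at once, training such a model would pair these logits with a per-label binary loss, e.g. torch.nn.BCEWithLogitsLoss against multi-hot targets. Keeping one classifier token per (label, scene/character) pair also makes the encoder's self-attention maps inspectable per label, which is the kind of analysis the abstract's last sentence refers to.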

Related Material


@InProceedings{Srivastava_2023_CVPR,
    author    = {Srivastava, Dhruv and Singh, Aditya Kumar and Tapaswi, Makarand},
    title     = {How You Feelin'? Learning Emotions and Mental States in Movie Scenes},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {2517-2528}
}