Leveraging CLIP Encoder for Multimodal Emotion Recognition
Yehun Song, Sunyoung Cho
Abstract
Multimodal emotion recognition (MER) aims to identify human emotions by combining data from various modalities such as language, audio, and vision. Despite recent advances in MER approaches, the difficulty of obtaining extensive datasets impedes further performance improvements. To mitigate this issue, we leverage a Contrastive Language-Image Pre-training (CLIP)-based architecture and its semantic knowledge from massive datasets, aiming to enhance discriminative multimodal representations. We propose a label encoder-guided MER framework based on CLIP (MER-CLIP) to learn emotion-related representations across modalities. Our approach introduces a label encoder that treats labels as text embeddings to incorporate their semantic information, leading to more representative emotional features. To further exploit label semantics, we devise a cross-modal decoder that aligns each modality to a shared embedding space by sequentially fusing modality features based on emotion-related input from the label encoder. Finally, label encoder-guided prediction enables generalization across diverse labels by embedding their semantic information as well as the word labels themselves. Experimental results show that our method outperforms state-of-the-art MER methods on the benchmark datasets CMU-MOSI and CMU-MOSEI.
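As a rough illustration of the fusion strategy described in the abstract, the sketch below shows how label text embeddings could act as queries that sequentially attend to language, audio, and vision features in a shared embedding space. This is a minimal PyTorch sketch under stated assumptions, not the authors' implementation: the module names (LabelGuidedFusion, MERCLIPSketch), the Transformer-decoder-based fusion, the embedding size, the fusion order, and the number of emotion labels are all illustrative choices.

```python
# Illustrative sketch only: label embeddings query each modality in turn.
# Dimensions, label count, and module structure are assumptions, not the
# paper's released code.
import torch
import torch.nn as nn

class LabelGuidedFusion(nn.Module):
    def __init__(self, dim=512, num_heads=8, num_layers=2):
        super().__init__()
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=num_heads,
                                           batch_first=True)
        # A cross-modal decoder reused for each modality in sequence.
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)

    def forward(self, label_emb, modality_feats):
        # label_emb: (B, num_labels, D) text embeddings of the emotion labels
        # modality_feats: list of (B, T_m, D) sequences, e.g. language, audio, vision
        query = label_emb
        for feats in modality_feats:
            # The label-derived query attends to each modality in turn,
            # accumulating emotion-related evidence in a shared space.
            query = self.decoder(tgt=query, memory=feats)
        return query

class MERCLIPSketch(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        # Modality encoders (e.g. a CLIP text/image encoder) are assumed to
        # have already produced D-dimensional feature sequences.
        self.fusion = LabelGuidedFusion(dim)
        self.score = nn.Linear(dim, 1)  # one score per label embedding

    def forward(self, label_emb, lang, audio, vision):
        fused = self.fusion(label_emb, [lang, audio, vision])  # (B, num_labels, D)
        return self.score(fused).squeeze(-1)                   # (B, num_labels)

# Toy usage with random tensors (7 hypothetical emotion labels).
if __name__ == "__main__":
    B, D = 4, 512
    label_emb = torch.randn(B, 7, D)
    lang, audio, vision = (torch.randn(B, 20, D),
                           torch.randn(B, 50, D),
                           torch.randn(B, 30, D))
    print(MERCLIPSketch()(label_emb, lang, audio, vision).shape)  # (4, 7)
```

Because the prediction head scores label embeddings rather than fixed class indices, swapping in embeddings of a different label set would not require changing the model's output dimension, which is the generalization property the abstract attributes to label encoder-guided prediction.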
Related Material
[pdf] [supp] [bibtex]
@InProceedings{Song_2025_WACV,
  author    = {Song, Yehun and Cho, Sunyoung},
  title     = {Leveraging CLIP Encoder for Multimodal Emotion Recognition},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {6115-6124}
}