Enhancing Zero-Shot Facial Expression Recognition by LLM Knowledge Transfer
Abstract
Current facial expression recognition (FER) models are often designed in a supervised learning manner and thus are constrained by the lack of large-scale facial expression images with high-quality annotations. Consequently, these models often fail to generalize well, performing poorly on unseen images at inference. Vision-language-based zero-shot models demonstrate a promising potential for addressing such challenges. However, these models lack task-specific knowledge and therefore are not optimized for the nuances of recognizing facial expressions. To bridge this gap, this work proposes a novel method, Exp-CLIP, to enhance zero-shot FER by transferring task knowledge from large language models (LLMs). Specifically, based on the pre-trained vision-language encoders, we incorporate a projection head designed to map the initial joint vision-language space into a space that captures representations of facial actions. To train this projection head for subsequent zero-shot predictions, we propose to align the projected visual representations with task-specific semantic meanings derived from the LLM encoder, and a text instruction-based strategy is employed to customize the LLM knowledge. Given unlabelled facial data and efficient training of the projection head, Exp-CLIP achieves superior zero-shot results over the CLIP models and several other large vision-language models (LVLMs) on seven in-the-wild FER datasets.
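To make the pipeline described above concrete, here is a minimal PyTorch sketch of the core idea: a lightweight projection head on top of frozen encoders, trained on unlabelled faces by aligning projected visual features with LLM-derived text embeddings, then reused for zero-shot prediction. The encoder stand-ins (random linear layers), embedding dimensions, loss, and prompt handling are illustrative assumptions, not the paper's exact components.

import torch
import torch.nn as nn
import torch.nn.functional as F

CLIP_DIM, LLM_DIM = 512, 768  # assumed embedding sizes

class ProjectionHead(nn.Module):
    """Maps the joint CLIP vision-language space toward an LLM-derived
    space intended to capture facial-action semantics."""
    def __init__(self, in_dim=CLIP_DIM, out_dim=LLM_DIM):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(in_dim, out_dim), nn.GELU(), nn.Linear(out_dim, out_dim)
        )

    def forward(self, x):
        return F.normalize(self.proj(x), dim=-1)

# Stand-ins for the frozen pre-trained encoders (replace with real CLIP
# image/text encoders and an LLM text encoder in practice).
clip_image_encoder = nn.Linear(3 * 224 * 224, CLIP_DIM).requires_grad_(False)
llm_text_encoder = nn.Linear(128, LLM_DIM).requires_grad_(False)

head = ProjectionHead()
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-4)

# Training on unlabelled faces: only the head is updated, by aligning
# projected visual features with LLM embeddings of instruction-customized
# facial-action descriptions for the same images (loss form is assumed).
for step in range(10):
    images = torch.randn(32, 3 * 224 * 224)    # dummy unlabelled face batch
    instruction_tokens = torch.randn(32, 128)  # dummy tokenized LLM inputs
    with torch.no_grad():
        v = clip_image_encoder(images)                                 # frozen visual features
        t = F.normalize(llm_text_encoder(instruction_tokens), dim=-1)  # LLM targets
    loss = 1 - F.cosine_similarity(head(v), t).mean()
    optimizer.zero_grad(); loss.backward(); optimizer.step()

# Zero-shot inference (assumed): pass CLIP text features of expression
# prompts through the same head and pick the most similar class.
clip_text_encoder = nn.Linear(77, CLIP_DIM).requires_grad_(False)  # stand-in
prompts = torch.randn(7, 77)  # dummy tokens for 7 expression prompts
with torch.no_grad():
    img_z = head(clip_image_encoder(images))
    txt_z = head(clip_text_encoder(prompts))
    pred = (img_z @ txt_z.t()).argmax(dim=-1)  # predicted expression index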
Related Material
[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Zhao_2025_WACV,
  author    = {Zhao, Zengqun and Cao, Yu and Gong, Shaogang and Patras, Ioannis},
  title     = {Enhancing Zero-Shot Facial Expression Recognition by LLM Knowledge Transfer},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {815-824}
}