A Prototype-Oriented Contrastive Adaption Network For Cross-domain Facial Expression Recognition

Chao Wang, Jundi Ding, Hui Yan, Si Shen; Proceedings of the Asian Conference on Computer Vision (ACCV), 2022, pp. 4194-4210

Abstract


Numerous well-performing facial expression recognition algorithms suffer a severe performance drop when trained on one dataset and tested on another, owing to inconsistencies across facial expression datasets caused by different acquisition conditions and the subjective biases of annotators. To improve the generalization ability of the model, in this paper we propose a simple but effective Prototype-Oriented Contrastive Adaptation Network (POCAN) that unifies contrastive learning and prototypical networks for cross-domain facial expression recognition. We employ a two-stage training pipeline. Specifically, in the first stage, we pre-train on the source domain to obtain semantically meaningful features and a good initialization for the target domain. In the second stage, we perform intra-domain feature learning and inter-domain feature fusion by narrowing the distance between samples and their corresponding prototypes while widening the distance to the other prototypes, and we further use an adversarial loss for domain-level alignment. In addition, we consider the problem of class imbalance and introduce class weights into our method so that the class distributions of the two domains are consistent. Extensive experiments show that our method yields competitive performance on both lab-controlled and in-the-wild datasets.
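The pull/push objective described above can be sketched as a prototype-oriented contrastive loss: a sample's similarity to its own class prototype is maximized relative to its similarity to the other prototypes via a softmax over cosine similarities. This is a minimal illustration under assumed choices (cosine similarity, a temperature hyperparameter); the function name and the paper's exact loss form and hyperparameters are not given in the abstract.

```python
import math

def prototype_contrastive_loss(feature, prototypes, label, temperature=0.1):
    """Hypothetical sketch: pull `feature` toward prototypes[label],
    push it away from the other prototypes.

    Computes -log( exp(cos(z, p_y)/T) / sum_k exp(cos(z, p_k)/T) ),
    i.e. cross-entropy over prototype similarities (an assumption,
    not necessarily the paper's exact formulation).
    """
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    # Similarity of the sample to every class prototype, scaled by temperature.
    logits = [cosine(feature, p) / temperature for p in prototypes]
    m = max(logits)  # subtract the max for numerical stability
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_denom - logits[label]

# A sample near its own prototype incurs a much smaller loss than one
# sitting near a wrong prototype, which drives the pull/push behavior.
protos = [[1.0, 0.0], [0.0, 1.0]]
close = prototype_contrastive_loss([0.9, 0.1], protos, label=0)
far = prototype_contrastive_loss([0.1, 0.9], protos, label=0)
```

In a full pipeline this term would be combined with the adversarial domain-alignment loss, and per-class weights (as the abstract describes) could scale each sample's loss to counter class imbalance.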

Related Material


[pdf] [code]
[bibtex]
@InProceedings{Wang_2022_ACCV,
    author    = {Wang, Chao and Ding, Jundi and Yan, Hui and Shen, Si},
    title     = {A Prototype-Oriented Contrastive Adaption Network For Cross-domain Facial Expression Recognition},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2022},
    pages     = {4194-4210}
}