Action Unit Detection by Exploiting Spatial-Temporal and Label-Wise Attention With Transformer

Lingfeng Wang, Jin Qi, Jian Cheng, Kenji Suzuki; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2022, pp. 2470-2475

Abstract


Facial action units (FAUs) defined by the Facial Action Coding System (FACS) have become an important basis for facial expression analysis. Most work on FAU detection considers only spatial-temporal features and ignores label-wise AU correlation. In practice, the strong relationships between facial AUs can aid AU detection. We propose a transformer-based FAU detection model that leverages both local spatial-temporal features and label-wise FAU correlation. Specifically, we first design a visual spatial-temporal transformer model and a convolution-based audio model to extract action-unit-specific features. Second, inspired by the relationships between FAUs, we propose a transformer-based correlation module to learn the correlation between AUs. The action-unit-specific features from the aural and visual models are further aggregated in the correlation module to produce per-frame predictions of 12 AUs. Our model was trained on the Aff-Wild2 dataset of the ABAW3 challenge and achieved state-of-the-art performance on the FAU task, which verifies the effectiveness of the proposed network.
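To make the described pipeline concrete, below is a minimal PyTorch sketch of the three-part architecture the abstract outlines: a spatial-temporal transformer over visual features, a convolutional audio branch, and a label-wise transformer that attends across the 12 AU tokens of each frame. All module names, feature dimensions, and the simple sum fusion are hypothetical illustrations, not the authors' released implementation; the abstract does not specify how the two modalities are aggregated.

```python
# Hedged sketch of the paper's architecture; dimensions and fusion are assumptions.
import torch
import torch.nn as nn

NUM_AUS = 12     # per-frame prediction of 12 AUs, as stated in the abstract
FEAT_DIM = 256   # hypothetical shared feature width

class VisualEncoder(nn.Module):
    """Spatial-temporal transformer over per-frame visual backbone features."""
    def __init__(self, in_dim=512, n_heads=8, n_layers=4):
        super().__init__()
        self.proj = nn.Linear(in_dim, FEAT_DIM)
        layer = nn.TransformerEncoderLayer(FEAT_DIM, n_heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, n_layers)

    def forward(self, x):                         # x: (B, T, in_dim)
        return self.temporal(self.proj(x))        # (B, T, FEAT_DIM)

class AudioEncoder(nn.Module):
    """Convolutional audio branch over frame-aligned acoustic features."""
    def __init__(self, in_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_dim, FEAT_DIM, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(FEAT_DIM, FEAT_DIM, kernel_size=3, padding=1),
        )

    def forward(self, a):                         # a: (B, T, in_dim)
        return self.conv(a.transpose(1, 2)).transpose(1, 2)  # (B, T, FEAT_DIM)

class LabelWiseCorrelation(nn.Module):
    """Transformer over the 12 AU tokens of each frame, so self-attention
    models inter-AU correlation rather than spatial or temporal context."""
    def __init__(self, n_heads=4, n_layers=2):
        super().__init__()
        self.au_embed = nn.Parameter(torch.randn(NUM_AUS, FEAT_DIM))
        layer = nn.TransformerEncoderLayer(FEAT_DIM, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(FEAT_DIM, 1)

    def forward(self, fused):                     # fused: (B, T, FEAT_DIM)
        B, T, D = fused.shape
        # One token per AU and frame: broadcast fused features over AU slots.
        tokens = fused.reshape(B * T, 1, D) + self.au_embed   # (B*T, 12, D)
        tokens = self.encoder(tokens)
        return self.head(tokens).squeeze(-1).reshape(B, T, NUM_AUS)

class AUDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.visual = VisualEncoder()
        self.audio = AudioEncoder()
        self.correlation = LabelWiseCorrelation()

    def forward(self, frames, audio):
        fused = self.visual(frames) + self.audio(audio)  # assumed sum fusion
        return self.correlation(fused)                   # logits: (B, T, 12)

if __name__ == "__main__":
    model = AUDetector()
    logits = model(torch.randn(2, 16, 512), torch.randn(2, 16, 128))
    print(logits.shape)  # torch.Size([2, 16, 12]) -> per-frame logits for 12 AUs
```

The key design point the sketch illustrates is that the correlation module's attention runs over the AU axis (12 tokens per frame) rather than the time axis, which is how label-wise dependencies between AUs can be learned separately from spatial-temporal context.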

Related Material


[bibtex]
@InProceedings{Wang_2022_CVPR,
    author    = {Wang, Lingfeng and Qi, Jin and Cheng, Jian and Suzuki, Kenji},
    title     = {Action Unit Detection by Exploiting Spatial-Temporal and Label-Wise Attention With Transformer},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2022},
    pages     = {2470-2475}
}