Multimodal 2D and 3D for In-The-Wild Facial Expression Recognition

Son Thai Ly, Nhu-Tai Do, Guee-Sang Lee, Soo-Hyung Kim, Hyung-Jeong Yang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019

Abstract


In this paper, unlike other in-the-wild facial expression recognition (FER) studies that focus only on 2D information, we present a fusion approach for 2D and 3D facial data in FER. In particular, the 3D facial data are first reconstructed from image datasets. The 3D information is then extracted by a deep learning technique that exploits meaningful facial geometry details for expression. We further demonstrate the potential of 3D facial data by taking 2D projected images of the 3D face as an additional input for FER. These features are fused with 2D features from a typical network. Following the experimental procedure of recent studies, the concatenated features are classified by linear support vector machines (SVMs). Comprehensive experiments are further conducted on integrating facial features for expression prediction. The results show that the proposed method achieves state-of-the-art recognition performance on both the RAF database and the SFEW 2.0 database. This is the first time such a deep learning combination of 3D and 2D facial modalities has been presented in the context of in-the-wild FER.
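The fusion step described above (concatenating per-modality deep features, then classifying with a linear SVM) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature dimensions, the seven-class expression labels, and the randomly generated stand-in features are all assumptions for the sake of a runnable example.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Hypothetical pre-extracted deep features (random stand-ins for real
# network outputs; dimensions are illustrative assumptions):
#   feats_2d   - features from a 2D network on the face image
#   feats_3d   - features from the network on reconstructed 3D facial data
#   feats_proj - features from 2D projected images of the 3D face
n_samples, n_classes = 200, 7  # e.g. 7 basic expression categories
feats_2d = rng.standard_normal((n_samples, 512))
feats_3d = rng.standard_normal((n_samples, 256))
feats_proj = rng.standard_normal((n_samples, 256))
labels = rng.integers(0, n_classes, n_samples)

# Fuse the modalities by concatenation along the feature axis,
# then train a linear SVM on the fused representation.
fused = np.concatenate([feats_2d, feats_3d, feats_proj], axis=1)
clf = LinearSVC(C=1.0, max_iter=5000).fit(fused, labels)
pred = clf.predict(fused)
print(fused.shape)  # (200, 1024)
```

In practice the features would come from trained networks rather than a random generator, and the SVM hyperparameters would be tuned on a validation split.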

Related Material


@InProceedings{Ly_2019_CVPR_Workshops,
author = {Thai Ly, Son and Do, Nhu-Tai and Lee, Guee-Sang and Kim, Soo-Hyung and Yang, Hyung-Jeong},
title = {Multimodal 2D and 3D for In-The-Wild Facial Expression Recognition},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2019}
}