Facial Affect Estimation in the Wild Using Deep Residual and Convolutional Networks

Behzad Hasani, Mohammad H. Mahoor; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2017, pp. 9-16

Abstract


Automated affective computing in the wild is a challenging task in the field of computer vision. This paper presents three neural network-based methods for facial affect estimation, submitted to the First Affect-in-the-Wild Challenge. The methods are based on Inception-ResNet modules redesigned specifically for facial affect estimation: Shallow Inception-ResNet, Deep Inception-ResNet, and Inception-ResNet with LSTMs. These networks extract facial features at different scales and simultaneously estimate both valence and arousal in each frame. The Deep Inception-ResNet method achieves Root Mean Square Error (RMSE) rates of 0.4 and 0.3 for valence and arousal respectively, with corresponding Concordance Correlation Coefficient (CCC) rates of 0.04 and 0.29.
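For reference, the two evaluation metrics reported above can be computed as follows. This is a minimal sketch (function names are my own, not from the paper); the CCC follows Lin's standard definition, penalizing both low correlation and mean/variance mismatch between predictions and ground truth:

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root Mean Square Error between ground-truth and predicted values
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def ccc(y_true, y_pred):
    # Concordance Correlation Coefficient (Lin, 1989):
    # 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2)
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = np.mean((y_true - mu_t) * (y_pred - mu_p))
    return float(2.0 * cov / (var_t + var_p + (mu_t - mu_p) ** 2))

# Example: per-frame valence annotations in [-1, 1] (illustrative values)
valence_true = np.array([0.1, -0.2, 0.4, 0.0, 0.3])
valence_pred = np.array([0.2, -0.1, 0.3, 0.1, 0.4])
print(rmse(valence_true, valence_pred))
print(ccc(valence_true, valence_pred))
```

Note that a predictor can have low RMSE yet low CCC (e.g., a nearly constant output close to the mean label), which is why the challenge reports both metrics.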

Related Material


[bibtex]
@InProceedings{Hasani_2017_CVPR_Workshops,
author = {Hasani, Behzad and Mahoor, Mohammad H.},
title = {Facial Affect Estimation in the Wild Using Deep Residual and Convolutional Networks},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {July},
year = {2017},
pages = {9-16}
}