Subject Adaptive Affection Recognition via Sparse Reconstruction

Chenyang Zhang, Yingli Tian; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2014, pp. 351-358

Abstract

Multimedia affection recognition from facial expressions and body gestures in RGB-D video sequences is a new research area. However, the large variance among subjects, especially in facial expressions, makes the problem more difficult. To address this issue, we propose a novel multimedia subject adaptive affection recognition framework via a two-layer sparse representation. Our framework makes two main contributions. In the subject adaptation stage, an iterative subject selection algorithm selects the most subject-related training instances instead of using the whole training set. In the inference stage, a joint decision is made with a confidence-based reconstruction prior to combine information from facial expressions and body gestures. We also collect a new RGB-D dataset for affection recognition with large subject variance. Experimental results demonstrate that the proposed framework increases discriminative power, especially for facial expressions, and that the joint recognition strategy exploits complementary information from both modalities to achieve a higher recognition rate.
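The framework builds on classification by sparse reconstruction: a test sample is approximated as a sparse combination of training instances, and the class whose instances yield the smallest reconstruction residual wins. A minimal sketch of this idea, using a simple greedy pursuit on synthetic features (not the paper's actual descriptors, two-layer structure, or subject selection step):

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: approximate y with at most
    k columns (atoms) of the dictionary D."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit on all selected atoms
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - D @ x
    return x

def src_classify(D, labels, y, k=5):
    """Sparse-representation classification: reconstruct y using each
    class's coefficients alone; return the class with smallest residual."""
    x = omp(D, y, k)
    classes = np.unique(labels)
    residuals = [np.linalg.norm(y - D @ np.where(labels == c, x, 0.0))
                 for c in classes]
    return classes[int(np.argmin(residuals))]
```

Here each dictionary column is one training instance; restricting the training set to subject-related instances, as the paper's subject selection stage does, shrinks `D` before this classification step.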

Related Material

[bibtex]
@InProceedings{Zhang_2014_CVPR_Workshops,
author = {Zhang, Chenyang and Tian, Yingli},
title = {Subject Adaptive Affection Recognition via Sparse Reconstruction},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2014},
pages = {351-358}
}