Capturing Global Semantic Relationships for Facial Action Unit Recognition
Ziheng Wang, Yongqiang Li, Shangfei Wang, Qiang Ji; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2013, pp. 3304-3311
Abstract
In this paper we tackle the problem of facial action unit (AU) recognition by exploiting the complex semantic relationships among AUs, which carry crucial top-down information yet have not been thoroughly exploited. Towards this goal, we build a hierarchical model that combines the bottom-level image features and the top-level AU relationships to jointly recognize AUs in a principled manner. The proposed model has two major advantages over existing methods. 1) Unlike methods that can only capture local pair-wise AU dependencies, our model is developed upon the restricted Boltzmann machine and therefore can exploit the global relationships among AUs. 2) Although AU relationships are influenced by many related factors such as facial expressions, these factors are generally ignored by the current methods. Our model, however, can successfully capture them to more accurately characterize the AU relationships. Efficient learning and inference algorithms of the proposed model are also developed. Experimental results on benchmark databases demonstrate the effectiveness of the proposed approach in modelling complex AU relationships as well as its superior AU recognition performance over existing approaches.
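The abstract's key claim is that an RBM-based layer can capture global dependencies among AU labels, rather than only pairwise ones, because each hidden unit pools evidence from all AUs simultaneously. The following is a minimal illustrative sketch of that idea only; the layer sizes, parameter values, and energy form are generic RBM assumptions, not the authors' actual hierarchical model.

```python
import numpy as np

# Sketch: an RBM over a binary vector of AU labels v. Each hidden unit
# connects to ALL visible units, so the model can encode higher-order
# (global) co-occurrence patterns among AUs, not just pairwise ones.
# All sizes and parameters below are illustrative assumptions.

rng = np.random.default_rng(0)
n_aus, n_hidden = 6, 4                       # hypothetical counts
W = rng.normal(0.0, 0.1, (n_hidden, n_aus))  # hidden-to-visible weights
b = np.zeros(n_aus)                          # visible (AU) biases
c = np.zeros(n_hidden)                       # hidden biases

def free_energy(v):
    """F(v) = -b.v - sum_j log(1 + exp(c_j + W_j.v)); lower F => more probable."""
    pre = c + W @ v
    return -b @ v - np.sum(np.logaddexp(0.0, pre))

def hidden_probs(v):
    """P(h_j = 1 | v): each hidden unit conditions on the whole AU vector."""
    return 1.0 / (1.0 + np.exp(-(c + W @ v)))

v = rng.integers(0, 2, n_aus).astype(float)  # a candidate AU label vector
f = free_energy(v)
p = hidden_probs(v)
```

In a joint recognition scheme of the kind the abstract describes, free energies like `F(v)` would score candidate AU label configurations alongside bottom-level image evidence; here the function is shown in isolation.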
Related Material
[pdf]  [bibtex]
@InProceedings{Wang_2013_ICCV,
author = {Wang, Ziheng and Li, Yongqiang and Wang, Shangfei and Ji, Qiang},
title = {Capturing Global Semantic Relationships for Facial Action Unit Recognition},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2013}
}