Action-Affect-Gender Classification Using Multi-Task Representation Learning

Timothy J. Shields, Mohamed R. Amer, Max Ehrlich, Amir Tamrakar; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2017, pp. 1-10

Abstract


Recent work in affective computing has focused on recognizing affect from facial expressions, with far less attention paid to the body. This work focuses on body affect. Affect does not occur in isolation; humans usually couple affect with an action, for example, a person could be running and happy. Recognizing body affect in sequences requires efficient algorithms that capture both the micro-movements that differentiate between, say, happy and sad, and the macro-variations between different actions. We depart from traditional approaches to time-series data analytics by proposing a multi-task learning model that learns a shared representation well-suited for action-affect-gender classification. As our building block we choose a probabilistic model, specifically Conditional Restricted Boltzmann Machines (CRBMs). We propose a new model that augments the CRBM with a factored multi-task component, enabling it to scale to a larger number of classes without increasing the number of parameters. We evaluate our approach on two publicly available datasets, the Body Affect dataset and the Tower Game dataset, and show classification performance superior to the state-of-the-art.
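
To make the factored multi-task idea concrete, below is a minimal NumPy sketch, not the authors' implementation, written under explicit assumptions: binary hidden units, real-valued (Gaussian) visible units, CD-1 training, and a per-class factorization W_y = U diag(z_y) V^T in which all classes share the factor matrices U and V and each new class adds only one gain vector z_y. All names (U, V, z, order, FactoredMultiTaskCRBM) are illustrative, not from the paper.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class FactoredMultiTaskCRBM:
    """Sketch of a CRBM whose pairwise weights are factored per class label:
    W_y = (U * z[y]) @ V.T, so classes share U and V and differ only in z[y]."""

    def __init__(self, n_vis, n_hid, n_fac, n_labels, order, lr=1e-3, seed=0):
        self.rng = np.random.default_rng(seed)
        self.U = 0.01 * self.rng.standard_normal((n_vis, n_fac))  # shared visible factors
        self.V = 0.01 * self.rng.standard_normal((n_hid, n_fac))  # shared hidden factors
        self.z = np.ones((n_labels, n_fac))                       # per-class factor gains
        self.A = 0.01 * self.rng.standard_normal((n_vis * order, n_vis))  # history -> visible bias
        self.B = 0.01 * self.rng.standard_normal((n_vis * order, n_hid))  # history -> hidden bias
        self.a = np.zeros(n_vis)   # static visible bias
        self.b = np.zeros(n_hid)   # static hidden bias
        self.lr = lr

    def weights(self, y):
        # Effective pairwise weights for class y; parameter count is
        # independent of the number of classes (only z grows, by one row).
        return (self.U * self.z[y]) @ self.V.T                    # (n_vis, n_hid)

    def hidden_prob(self, v, hist, y):
        return sigmoid(self.b + hist @ self.B + v @ self.weights(y))

    def visible_mean(self, h, hist, y):
        return self.a + hist @ self.A + h @ self.weights(y).T

    def cd1_step(self, v, hist, y):
        """One contrastive-divergence update from a frame v and its flattened history."""
        h0 = self.hidden_prob(v, hist, y)
        h_s = (self.rng.random(h0.shape) < h0).astype(float)      # sample hidden units
        v1 = self.visible_mean(h_s, hist, y)                      # mean-field reconstruction
        h1 = self.hidden_prob(v1, hist, y)
        dW = np.outer(v, h0) - np.outer(v1, h1)                   # gradient w.r.t. full W_y
        gU = (dW @ self.V) * self.z[y]                            # chain rule through factorization
        gV = (dW.T @ self.U) * self.z[y]
        gz = ((self.U.T @ dW) * self.V.T).sum(axis=1)
        self.U += self.lr * gU
        self.V += self.lr * gV
        self.z[y] += self.lr * gz
        self.A += self.lr * np.outer(hist, v - v1)
        self.B += self.lr * np.outer(hist, h0 - h1)
        self.a += self.lr * (v - v1)
        self.b += self.lr * (h0 - h1)

    def free_energy(self, v, hist, y):
        """Lower is better; scores how well class y explains the frame."""
        quad = 0.5 * np.sum((v - self.a - hist @ self.A) ** 2)
        soft = np.logaddexp(0.0, self.b + hist @ self.B + v @ self.weights(y)).sum()
        return quad - soft

    def classify(self, v, hist):
        return int(np.argmin([self.free_energy(v, hist, y)
                              for y in range(self.z.shape[0])]))

# Hypothetical usage on random mocap-like features (60 dims, 3 past frames):
model = FactoredMultiTaskCRBM(n_vis=60, n_hid=100, n_fac=50, n_labels=9, order=3)
v, hist = np.random.randn(60), np.random.randn(180)
model.cd1_step(v, hist, y=2)
print(model.classify(v, hist))

Classification here scores each candidate label by free energy and picks the lowest, a standard way to use an energy-based model discriminatively; the paper's exact inference procedure may differ.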

Related Material


[pdf]
[bibtex]
@InProceedings{Shields_2017_CVPR_Workshops,
author = {Shields, Timothy J. and Amer, Mohamed R. and Ehrlich, Max and Tamrakar, Amir},
title = {Action-Affect-Gender Classification Using Multi-Task Representation Learning},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {July},
year = {2017}
}