Applying Action Attribute Class Validation to Improve Human Activity Recognition

David Tahmoush; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2015, pp. 15-21

Abstract


When learning a new classifier, poor-quality training data can significantly degrade performance. Applying selection conditions to the training data can prevent mislabeled, noisy, or damaged data from skewing the classifier. We extend a set of action attributes and apply attribute-based selection conditions to the training cases of a challenging action recognition dataset. Short-range 3D imagers produce three-dimensional point-cloud movies that can be analyzed for structure and motion information, including actions. We skeletonize the human point cloud to estimate joint motion, which produces a significant number of errors as well as damaged and misrepresented cases. By selectively pruning the training cases using the extended action attributes, we improve classifier performance on some classes by over 5% and raise the state-of-the-art accuracy from 85% to over 88%. In addition, the attribute inconsistencies discovered in the subjects' actions explain the consistently disappointing performance of multiple algorithms on the same data.
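The core idea of attribute-based pruning can be illustrated in a few lines. The sketch below is not the paper's actual pipeline; the binary attribute vectors, per-class attribute signatures, agreement threshold, and the SVM classifier are all illustrative assumptions standing in for details given in the full paper. It keeps only training cases whose estimated action attributes are consistent with the expected attribute signature of their labeled class before training.

```python
# Minimal sketch of attribute-based training-case pruning (assumptions:
# each case has a binary attribute vector estimated from skeletonized
# point-cloud motion; class signatures and the 0.8 threshold are illustrative).
import numpy as np
from sklearn.svm import SVC

def prune_inconsistent_cases(features, attributes, labels,
                             class_signatures, min_agreement=0.8):
    """Keep training cases whose attributes agree with their class signature."""
    keep = []
    for i, (attr, label) in enumerate(zip(attributes, labels)):
        signature = class_signatures[label]        # expected attribute pattern
        agreement = np.mean(attr == signature)     # fraction of matching attributes
        if agreement >= min_agreement:
            keep.append(i)
    keep = np.asarray(keep)
    return features[keep], labels[keep]

# Synthetic example: 200 cases, 40-dim skeleton features,
# 6 binary attributes, 3 action classes.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 40))
labels = rng.integers(0, 3, size=200)
class_signatures = {c: rng.integers(0, 2, size=6) for c in range(3)}
attributes = np.array([class_signatures[c] for c in labels])
# Flip ~10% of attribute estimates to mimic skeletonization errors.
noisy = rng.random(attributes.shape) < 0.1
attributes = np.where(noisy, 1 - attributes, attributes)

X_clean, y_clean = prune_inconsistent_cases(features, attributes, labels,
                                            class_signatures)
clf = SVC(kernel="rbf").fit(X_clean, y_clean)
```

In this toy setup, cases whose estimated attributes disagree with their labeled class's signature (here, more than one of six attributes flipped) are dropped before fitting, which is the general mechanism the abstract describes for keeping damaged or mislabeled skeletonization output from skewing the classifier.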

Related Material


[bibtex]
@InProceedings{Tahmoush_2015_CVPR_Workshops,
  author    = {Tahmoush, David},
  title     = {Applying Action Attribute Class Validation to Improve Human Activity Recognition},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2015},
  pages     = {15-21}
}