Temporal Distance Matrices for Squat Classification

Ryoji Ogata, Edgar Simo-Serra, Satoshi Iizuka, Hiroshi Ishikawa; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019

Abstract


When working out, it is necessary to perform the same action many times for it to have an effect. If an action such as squats or bench presses is performed with poor form, it can lead to serious injuries in the long term. To address this, we present an action dataset of squats in which different types of poor form have been annotated, covering a diversity of users and backgrounds, and propose a model based on temporal distance matrices for the classification task. We first run a 3D pose detector, then normalize the pose and compute the distance matrix, in which each element represents the distance between two joints. This representation is invariant to differences between individuals, global translation, and global rotation, allowing for high generalization to real-world data. Our classification model consists of a CNN with 1D convolutions. Results show that our method significantly outperforms existing approaches on this task.
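The per-frame distance matrix described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the joint count, the pairwise-distance computation via NumPy broadcasting, and the normalization by a reference bone length (to remove body-size differences between individuals) are all assumptions for the sketch.

```python
import numpy as np

def distance_matrix(joints):
    """Pairwise Euclidean distances between 3D joints for one frame.

    joints: (J, 3) array of 3D joint positions.
    Returns a symmetric (J, J) matrix whose (i, j) entry is the
    distance between joints i and j. Because only relative
    distances are used, the result is invariant to global
    translation and global rotation of the pose.
    """
    diff = joints[:, None, :] - joints[None, :, :]  # (J, J, 3)
    return np.linalg.norm(diff, axis=-1)

def normalized_distance_matrix(joints, ref_scale):
    """Hypothetical normalization: divide by a reference length
    (e.g., torso length) so the matrix is also invariant to
    differences in body size between individuals."""
    return distance_matrix(joints) / ref_scale
```

Stacking one such matrix per frame yields the temporal sequence that a 1D-convolutional classifier can consume.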

Related Material


[bibtex]
@InProceedings{Ogata_2019_CVPR_Workshops,
author = {Ogata, Ryoji and Simo-Serra, Edgar and Iizuka, Satoshi and Ishikawa, Hiroshi},
title = {Temporal Distance Matrices for Squat Classification},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2019}
}