APE-V: Athlete Performance Evaluation Using Video

Chaitanya Roygaga, Dhruva Patil, Michael Boyle, William Pickard, Raoul Reiser, Aparna Bharati, Nathaniel Blanchard; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Workshops, 2022, pp. 691-700

Abstract


Athletes typically undergo regular evaluations by trainers and coaches to assess performance and injury risk. One of the most popular movements for examining lower-extremity strength and power in athletes is the vertical jump. Specifically, maximal-effort countermovement and drop jumps performed on bilateral force plates provide a wealth of metrics; however, the expense of the equipment and the expertise needed to interpret the results limit their use. Computer vision techniques applied to videos of these movements offer a less expensive way to extract complementary metrics. Blanchard et al. collected a dataset of 89 athletes performing these movements and showcased how OpenPose could be applied to the data. However, athlete error calls 46.2% of the movements into question; in these cases, an expert assessor would have the athlete redo the movement to eliminate the error. Here, we augmented Blanchard et al.'s dataset with expert labels of errors and established benchmark performance on automatic error identification. In total, trained annotators identified 14 different types of errors. Our benchmark models identified errors with an F1 score of 0.710 and a Kappa of 0.457 (Kappa measures accuracy over chance). All code and augmented labels can be found at https://blanchard-lab.github.io/apev.github.io/.
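
The abstract reports two metrics, F1 and Kappa. As a minimal illustrative sketch (not the authors' code, and assuming the reported Kappa is Cohen's kappa over binary error/no-error labels per movement), the two scores can be computed with scikit-learn as follows; the label vectors below are hypothetical, not drawn from the APE-V dataset.

from sklearn.metrics import f1_score, cohen_kappa_score

# 1 = movement contains an athlete error, 0 = clean movement
y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical expert annotations
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # hypothetical model predictions

print("F1:   ", f1_score(y_true, y_pred))           # harmonic mean of precision and recall
print("Kappa:", cohen_kappa_score(y_true, y_pred))  # agreement corrected for chance

Unlike F1, Cohen's kappa discounts the agreement a model would achieve by chance given the label distribution, which is why the paper reports it alongside F1.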

Related Material


[bibtex]
@InProceedings{Roygaga_2022_WACV,
    author    = {Roygaga, Chaitanya and Patil, Dhruva and Boyle, Michael and Pickard, William and Reiser, Raoul and Bharati, Aparna and Blanchard, Nathaniel},
    title     = {APE-V: Athlete Performance Evaluation Using Video},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Workshops},
    month     = {January},
    year      = {2022},
    pages     = {691-700}
}