Muscles in Action

Mia Chiquier, Carl Vondrick; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 22091-22101

Abstract


Human motion is created by, and constrained by, our muscles. We take a first step toward building computer vision methods that represent the internal muscle activity that causes motion. We present a new dataset, Muscles in Action (MIA), to learn to incorporate muscle activity into human motion representations. The dataset consists of 12.5 hours of synchronized video and surface electromyography (sEMG) data of 10 subjects performing various exercises. Using this dataset, we learn a bidirectional representation that predicts muscle activation from video and, conversely, reconstructs motion from muscle activation. We evaluate our model on in-distribution subjects and exercises, as well as on out-of-distribution subjects and exercises. We demonstrate how modeling both modalities jointly can serve as conditioning for muscularly consistent motion generation. Putting muscles into computer vision systems will enable richer models of virtual humans, with applications in sports, fitness, and AR/VR.
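As a rough illustration of the bidirectional setup described above (video-to-sEMG prediction and sEMG-to-motion reconstruction), a minimal sketch follows. The module names, feature dimensions, GRU/linear layer choices, and the use of per-frame pose features as the video representation are all illustrative assumptions, not the authors' architecture.

import torch
import torch.nn as nn

# Hypothetical sketch of the two directions in the abstract: video (reduced here
# to per-frame pose features) -> muscle activations, and sEMG -> motion.
# Dimensions and layers are illustrative assumptions only.

class VideoToEMG(nn.Module):
    """Predict per-frame muscle activations from per-frame pose/video features."""
    def __init__(self, pose_dim=51, n_muscles=8, hidden=256):
        super().__init__()
        self.temporal = nn.GRU(pose_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_muscles)

    def forward(self, pose_seq):                  # (B, T, pose_dim)
        h, _ = self.temporal(pose_seq)            # (B, T, hidden)
        return self.head(h).sigmoid()             # (B, T, n_muscles) activations in [0, 1]

class EMGToMotion(nn.Module):
    """Reconstruct a pose sequence from per-frame muscle activations."""
    def __init__(self, n_muscles=8, pose_dim=51, hidden=256):
        super().__init__()
        self.temporal = nn.GRU(n_muscles, hidden, batch_first=True)
        self.head = nn.Linear(hidden, pose_dim)

    def forward(self, emg_seq):                   # (B, T, n_muscles)
        h, _ = self.temporal(emg_seq)
        return self.head(h)                       # (B, T, pose_dim)

if __name__ == "__main__":
    B, T = 2, 30                                  # two clips of 30 frames each
    pose = torch.randn(B, T, 51)
    emg_hat = VideoToEMG()(pose)                  # video -> muscle activation
    pose_hat = EMGToMotion()(emg_hat)             # muscle activation -> motion
    print(emg_hat.shape, pose_hat.shape)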

Related Material


@InProceedings{Chiquier_2023_ICCV,
    author    = {Chiquier, Mia and Vondrick, Carl},
    title     = {Muscles in Action},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {22091-22101}
}