Egocentric Activity Recognition on a Budget

Rafael Possas, Sheila Pinto Caceres, Fabio Ramos; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 5967-5976

Abstract


Recent advances in embedded technology have enabled more pervasive machine learning. One common application in this field is Egocentric Activity Recognition (EAR), where users wearing a device such as a smartphone or smartglasses receive feedback from the embedded device. Recent research on activity recognition has focused mainly on improving accuracy through resource-intensive techniques such as multi-stream deep networks. Although this approach has produced state-of-the-art results, in most cases it neglects the natural resource constraints (e.g. battery) of wearable devices. We develop a model-free Reinforcement Learning method to learn energy-aware policies that maximize the use of low-energy-cost predictors while keeping competitive accuracy levels. Our results show that a policy trained on an egocentric dataset is able to use the synergy between motion sensors and vision to effectively trade off energy expenditure and accuracy on smartglasses operating in realistic, real-world conditions.
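
To make the accuracy/energy trade-off concrete, below is a minimal, self-contained sketch, not the paper's actual method: a stateless bandit that learns via one-step Q-learning when a cheap motion-sensor predictor suffices versus when an expensive vision predictor is worth its energy cost. The energy costs, predictor accuracies, and the penalty weight LAMBDA are all invented for illustration.

import numpy as np

# Hypothetical per-prediction energy costs (arbitrary units) and
# expected accuracies for each predictor; not the paper's numbers.
ENERGY_COST = {"motion": 1.0, "vision": 10.0}
ACCURACY = {"motion": 0.70, "vision": 0.90}
LAMBDA = 0.02  # assumed energy-penalty weight in the reward

rng = np.random.default_rng(0)
actions = ["motion", "vision"]

# Tabular Q-values for a single stateless decision, kept simple on purpose.
q = np.zeros(len(actions))
alpha, eps = 0.1, 0.1  # learning rate and epsilon-greedy exploration rate

for step in range(5000):
    # Epsilon-greedy selection between the two predictors.
    a = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(q))
    name = actions[a]
    # Simulated outcome: the chosen predictor is correct with its accuracy.
    correct = rng.random() < ACCURACY[name]
    # Reward trades off recognition accuracy against energy expenditure.
    r = float(correct) - LAMBDA * ENERGY_COST[name]
    # One-step Q-learning update (bandit setting: no next state).
    q[a] += alpha * (r - q[a])

print({a: round(v, 3) for a, v in zip(actions, q)})

With these made-up numbers, the learned Q-values favor the vision predictor only while its accuracy gain outweighs LAMBDA times its extra energy cost; raising LAMBDA shifts the policy toward the motion sensor, which is the qualitative behavior an energy-aware policy is meant to exhibit.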

Related Material


[bibtex]
@InProceedings{Possas_2018_CVPR,
author = {Possas, Rafael and Caceres, Sheila Pinto and Ramos, Fabio},
title = {Egocentric Activity Recognition on a Budget},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}