Recognizing Personal Contexts From Egocentric Images

Antonino Furnari, Giovanni M. Farinella, Sebastiano Battiato; Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 2015, pp. 1-9


Wearable cameras can gather first-person images of the environment, opening new opportunities for the development of systems able to assist users in their daily lives. This paper studies the problem of recognizing personal contexts from images acquired by wearable devices, which finds useful applications in daily routine analysis and stress monitoring. To assess the influence of device-specific features, such as the Field of View and the wearing modality, a dataset of five personal contexts is acquired using four different devices. We propose a benchmark classification pipeline which combines a one-class classifier to detect negative samples (i.e., images not representing any of the personal contexts under analysis) with a classic one-vs-one multi-class classifier to discriminate among the contexts. Several experiments are designed to compare the performance of state-of-the-art representations for object and scene classification when used with data acquired by different wearable devices.
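The two-stage pipeline described above can be sketched as follows. This is an illustrative implementation, not the authors' code: it assumes scikit-learn's `OneClassSVM` as the negative-sample rejector and `SVC` with one-vs-one decision shape as the multi-class discriminator, and it uses synthetic 20-dimensional features as stand-ins for the object/scene representations benchmarked in the paper.

```python
import numpy as np
from sklearn.svm import OneClassSVM, SVC

# Synthetic stand-in features: five personal contexts, 30 samples each,
# drawn from well-separated Gaussians (placeholder for real descriptors).
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(loc=c, scale=0.3, size=(30, 20)) for c in range(5)])
y_train = np.repeat(np.arange(5), 30)

# Stage 1: one-class classifier trained only on positive (context) samples;
# it rejects images that depict none of the contexts under analysis.
rejector = OneClassSVM(gamma="scale", nu=0.1).fit(X_train)

# Stage 2: classic one-vs-one multi-class SVM to discriminate among contexts.
classifier = SVC(decision_function_shape="ovo").fit(X_train, y_train)

def predict_context(x):
    """Return -1 for rejected (negative) images, else the predicted context id."""
    x = np.atleast_2d(x)
    if rejector.predict(x)[0] == -1:  # flagged as out-of-set by stage 1
        return -1
    return int(classifier.predict(x)[0])
```

A sample far from all training contexts is rejected by stage 1 and never reaches the multi-class classifier; in-distribution samples fall through to the one-vs-one SVM.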

Related Material

@InProceedings{Furnari_2015_ICCV_Workshops,
  author    = {Furnari, Antonino and Farinella, Giovanni M. and Battiato, Sebastiano},
  title     = {Recognizing Personal Contexts From Egocentric Images},
  booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
  month     = {December},
  year      = {2015}
}