First-Person Activity Forecasting With Online Inverse Reinforcement Learning

Nicholas Rhinehart, Kris M. Kitani; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 3696-3705

Abstract


We address the problem of incrementally modeling and forecasting the long-term goals of a first-person camera wearer: what the user will do, where they will go, and what goal they seek. In contrast to prior work in trajectory forecasting, our algorithm, Darko, goes further to reason about semantic states (will I pick up an object?) and about future goal states that are far away in both space and time. Darko learns and forecasts from first-person visual observations of the user's daily behaviors via an Online Inverse Reinforcement Learning (IRL) approach. Classical IRL discovers only the rewards in a batch setting, whereas Darko discovers the states, transitions, rewards, and goals of a user from streaming data. Among other results, we show that Darko forecasts goals better than competing methods in both noisy and ideal settings, and that our approach is theoretically and empirically no-regret.
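
To make the online IRL idea concrete, below is a minimal illustrative sketch (not the authors' released code) of maximum-entropy IRL with one gradient update per incoming trajectory and a soft-value-based goal posterior. It assumes a small tabular MDP with linearly parameterized state rewards and a uniform goal prior; the names OnlineMaxEntIRL, soft_value_iteration, expected_feature_counts, and goal_posterior are hypothetical, and the goal-forecasting rule is the generic MaxEnt ratio-of-soft-values heuristic rather than the paper's exact formulation.

# Illustrative sketch of online maximum-entropy IRL on a tabular MDP.
# All function and class names are hypothetical; this is not the paper's code.
import numpy as np

def soft_value_iteration(transitions, reward, goal, n_iters=100):
    """Soft (MaxEnt) value iteration toward an absorbing goal state.

    transitions: (S, A, S) transition probabilities.
    reward:      (S,) state rewards (typically negative costs).
    goal:        index of the absorbing goal state.
    Returns soft values V (S,) and the induced stochastic policy (S, A).
    """
    S, A, _ = transitions.shape
    V = np.full(S, -1e3)
    V[goal] = 0.0
    for _ in range(n_iters):
        Q = reward[:, None] + transitions @ V            # (S, A) soft Q-values
        V_new = np.logaddexp.reduce(Q, axis=1)           # soft max over actions
        V_new[goal] = 0.0
        V = V_new
    policy = np.exp(Q - np.logaddexp.reduce(Q, axis=1, keepdims=True))
    return V, policy

def expected_feature_counts(policy, transitions, features, start, horizon):
    """Expected feature counts of the soft-optimal policy from a start state."""
    S, _, _ = transitions.shape
    d = np.zeros(S)
    d[start] = 1.0
    total = np.zeros(features.shape[1])
    for _ in range(horizon):
        total += d @ features
        d = np.einsum('s,sa,sat->t', d, policy, transitions)  # push state marginal forward
    return total

class OnlineMaxEntIRL:
    """Online IRL: one gradient step on the MaxEnt likelihood per new trajectory."""
    def __init__(self, features, transitions, lr=0.1):
        self.features = features        # (S, F) state feature matrix
        self.transitions = transitions  # (S, A, S)
        self.theta = np.zeros(features.shape[1])
        self.lr = lr

    def update(self, trajectory, goal):
        """trajectory: list of visited state indices ending at/near `goal`."""
        reward = self.features @ self.theta
        _, policy = soft_value_iteration(self.transitions, reward, goal)
        empirical = self.features[trajectory].sum(axis=0)
        expected = expected_feature_counts(
            policy, self.transitions, self.features,
            start=trajectory[0], horizon=len(trajectory))
        # Gradient of the MaxEnt log-likelihood: demonstrated minus expected counts.
        self.theta += self.lr * (empirical - expected)

    def goal_posterior(self, partial_trajectory, candidate_goals):
        """Forecast the goal: P(g | partial path) ∝ exp(V_g(s_t) - V_g(s_0)), uniform prior."""
        reward = self.features @ self.theta
        scores = []
        for g in candidate_goals:
            V, _ = soft_value_iteration(self.transitions, reward, g)
            scores.append(V[partial_trajectory[-1]] - V[partial_trajectory[0]])
        scores = np.array(scores)
        p = np.exp(scores - scores.max())
        return p / p.sum()

In this sketch, each newly observed trajectory triggers a single update() call, so the reward estimate improves incrementally from streaming demonstrations, mirroring the online (rather than batch) setting described in the abstract; goal_posterior can then be queried at any point along a partially observed trajectory.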

Related Material


[bibtex]
@InProceedings{Rhinehart_2017_ICCV,
author = {Rhinehart, Nicholas and Kitani, Kris M.},
title = {First-Person Activity Forecasting With Online Inverse Reinforcement Learning},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}