Unsupervised Procedure Learning via Joint Dynamic Summarization

Ehsan Elhamifar, Zwe Naing; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 6341-6350

Abstract


We address the problem of unsupervised procedure learning from unconstrained instructional videos. Our goal is to produce a summary of the procedure key-steps and their ordering needed to perform a given task, as well as a localization of the key-steps in videos. We develop a collaborative sequential subset selection framework in which we learn a dynamic model of the videos by estimating states and the transitions between them; states correspond to different subactivities, including background and procedure steps. To extract procedure key-steps, we develop an optimization framework that finds a sequence of a small number of states that well represents all videos and is compatible with the state transition model. Given that our proposed optimization is non-convex and NP-hard, we develop a fast greedy algorithm whose complexity is linear in the length of the videos and the number of states of the dynamic model; hence, it scales to large datasets. Under appropriate conditions on the transition model, our proposed formulation is approximately submodular and therefore comes with performance guarantees. We also present ProceL, a new multimodal dataset of 47.3 hours of videos and their transcripts from diverse tasks, for procedure learning evaluation. Through extensive experiments, we show that our framework significantly improves the state-of-the-art performance.
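The greedy algorithm mentioned above can be illustrated with a minimal sketch. This is not the paper's exact objective; it assumes a simplified facility-location-style coverage term (how well each selected state represents the video frames) plus a transition-compatibility bonus between consecutively selected states. The function name, array shapes, and the weight `lam` are all hypothetical illustration choices.

```python
import numpy as np

def greedy_key_steps(sim, log_trans, k, lam=1.0):
    """Greedily select a sequence of k states (illustrative sketch only).

    sim:       (S, T) nonnegative similarity of each of S states to T frames
    log_trans: (S, S) log transition scores between states
    k:         number of key-steps to select
    lam:       hypothetical weight on the transition-compatibility term
    """
    S, T = sim.shape
    selected = []
    coverage = np.zeros(T)  # best similarity achieved so far for each frame
    for _ in range(k):
        best_gain, best_s = -np.inf, None
        for s in range(S):
            if s in selected:
                continue
            # marginal coverage gain: frames now better represented by state s
            cov_gain = np.maximum(coverage, sim[s]).sum() - coverage.sum()
            # compatibility with the previously selected state (dynamic model)
            dyn = lam * log_trans[selected[-1], s] if selected else 0.0
            gain = cov_gain + dyn
            if gain > best_gain:
                best_gain, best_s = gain, s
        selected.append(best_s)
        coverage = np.maximum(coverage, sim[best_s])
    return selected
```

Each of the `k` iterations scans all `S` states and updates a length-`T` coverage vector, so the cost is linear in the video length for a fixed number of states, consistent with the scalability claim in the abstract.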

Related Material


@InProceedings{Elhamifar_2019_ICCV,
author = {Elhamifar, Ehsan and Naing, Zwe},
title = {Unsupervised Procedure Learning via Joint Dynamic Summarization},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}