STEPs: Self-Supervised Key Step Extraction and Localization from Unlabeled Procedural Videos

Anshul Shah, Benjamin Lundell, Harpreet Sawhney, Rama Chellappa; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 10375-10387

Abstract


We address the problem of extracting key steps from unlabeled procedural videos, motivated by the potential of Augmented Reality (AR) headsets to revolutionize job training and performance. We decompose the problem into two steps: representation learning and key step extraction. We propose a training objective, the Bootstrapped Multi-Cue Contrastive (BMC2) loss, to learn discriminative representations for various steps without any labels. Different from prior works, we develop techniques to train a lightweight temporal module that uses off-the-shelf features for self-supervision. Our approach can seamlessly leverage information from multiple cues such as optical flow, depth, or gaze to learn discriminative features for key steps, making it amenable to AR applications. We finally extract key steps via a tunable algorithm that clusters the representations and samples key steps from the clusters. We show significant improvements over prior works on the tasks of key step localization and phase classification. Qualitative results demonstrate that the extracted key steps are meaningful and succinctly represent the various steps of the procedural tasks.
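The second stage described above, extracting key steps by clustering learned frame representations and sampling from the clusters, can be sketched as follows. This is a minimal illustration under assumed details (k-means clustering, one representative frame per cluster chosen nearest its centroid); the paper's actual tunable algorithm may differ, and `extract_key_steps` is a hypothetical helper name, not from the paper.

```python
import numpy as np

def extract_key_steps(features, k, n_iter=50, seed=0):
    """Cluster per-frame features with k-means and sample one key step
    per cluster: the frame closest to the cluster centroid, returned
    in temporal order. `features` has shape (num_frames, feat_dim)."""
    rng = np.random.default_rng(seed)
    # Initialize centroids from k distinct random frames.
    centroids = features[rng.choice(len(features), size=k, replace=False)].copy()
    for _ in range(n_iter):
        # Assign each frame to its nearest centroid.
        dists = np.linalg.norm(features[:, None] - centroids[None], axis=-1)
        assign = dists.argmin(axis=1)
        # Update each centroid (keep the old one if its cluster is empty).
        for c in range(k):
            if (assign == c).any():
                centroids[c] = features[assign == c].mean(axis=0)
    # Key step for each cluster = frame nearest its centroid.
    dists = np.linalg.norm(features[:, None] - centroids[None], axis=-1)
    return sorted(dists.argmin(axis=0).tolist())
```

The number of clusters k acts as the tunable knob: raising it yields a finer-grained set of candidate key steps, while lowering it keeps only the most distinct ones.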

Related Material


[bibtex]
@InProceedings{Shah_2023_ICCV,
  author    = {Shah, Anshul and Lundell, Benjamin and Sawhney, Harpreet and Chellappa, Rama},
  title     = {STEPs: Self-Supervised Key Step Extraction and Localization from Unlabeled Procedural Videos},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2023},
  pages     = {10375-10387}
}