Video Summarization by Learning From Unpaired Data

Mrigank Rochan, Yang Wang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 7902-7911

Abstract

We consider the problem of video summarization. Given an input raw video, the goal is to select a small subset of key frames from the input video to create a shorter summary video that best describes the content of the original video. Most of the current state-of-the-art video summarization approaches use supervised learning and require labeled training data. Each training instance consists of a raw input video and its ground truth summary video curated by human annotators. However, it is very expensive and difficult to create such labeled training examples. To address this limitation, we propose a novel formulation to learn video summarization from unpaired data. We present an approach that learns to generate optimal video summaries using a set of raw videos (V) and a set of summary videos (S), where there exists no correspondence between V and S. We argue that this type of data is much easier to collect. Our model aims to learn a mapping function F : V -> S such that the distribution of resultant summary videos from F(V) is similar to the distribution of S with the help of an adversarial objective. In addition, we enforce a diversity constraint on F(V) to ensure that the generated video summaries are visually diverse. Experimental results on two benchmark datasets indicate that our proposed approach significantly outperforms other alternative methods.
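
Below is a minimal, illustrative PyTorch sketch of how an unpaired adversarial objective with a diversity constraint of this kind might be wired together. It is not the authors' released code: it assumes pre-extracted frame features of shape (T, D) per video, and all module names, architectures, and hyper-parameters (Summarizer, Discriminator, diversity_loss, lam) are hypothetical stand-ins for the model described in the paper.

```python
# Hedged sketch of unpaired adversarial training for video summarization.
# Assumes pre-extracted frame features; shapes and hyper-parameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Summarizer(nn.Module):
    """Maps a raw video (frame features) to per-frame importance scores."""
    def __init__(self, feat_dim=1024, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(feat_dim, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size=1),
        )

    def forward(self, x):                                # x: (B, T, D)
        scores = self.net(x.transpose(1, 2)).squeeze(1)  # (B, T)
        return torch.sigmoid(scores)


class Discriminator(nn.Module):
    """Scores whether a (score-weighted) video looks like a real summary."""
    def __init__(self, feat_dim=1024, hidden=256):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                                # x: (B, T, D)
        _, h = self.gru(x)
        return self.out(h[-1])                           # (B, 1) real/fake logit


def diversity_loss(feats, scores, eps=1e-8):
    """Penalize pairwise similarity among frames the summarizer selects."""
    w = scores.unsqueeze(-1)                             # (B, T, 1)
    f = F.normalize(feats, dim=-1)
    sim = torch.bmm(f, f.transpose(1, 2))                # (B, T, T) cosine sims
    weighted = torch.bmm(w, w.transpose(1, 2)) * sim     # weight by selection scores
    return weighted.sum() / (scores.sum() ** 2 + eps)


def train_step(summ, disc, opt_s, opt_d, raw_video, real_summary, lam=1.0):
    """One update with unpaired batches: raw_video (B, T, D), real_summary (B, T', D)."""
    scores = summ(raw_video)
    fake_summary = raw_video * scores.unsqueeze(-1)      # soft frame selection

    # Discriminator: separate real summaries from generated ones.
    real_logit = disc(real_summary)
    fake_logit = disc(fake_summary.detach())
    d_loss = (F.binary_cross_entropy_with_logits(real_logit, torch.ones_like(real_logit))
              + F.binary_cross_entropy_with_logits(fake_logit, torch.zeros_like(fake_logit)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Summarizer: fool the discriminator while keeping the summary visually diverse.
    g_logit = disc(fake_summary)
    g_loss = (F.binary_cross_entropy_with_logits(g_logit, torch.ones_like(g_logit))
              + lam * diversity_loss(raw_video, scores))
    opt_s.zero_grad(); g_loss.backward(); opt_s.step()
    return d_loss.item(), g_loss.item()
```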

Related Material

[pdf]
[bibtex]
@InProceedings{Rochan_2019_CVPR,
author = {Rochan, Mrigank and Wang, Yang},
title = {Video Summarization by Learning From Unpaired Data},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}