Learning to Extract Flawless Slow Motion From Blurry Videos

Meiguang Jin, Zhe Hu, Paolo Favaro; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 8112-8121

Abstract


In this paper, we introduce the task of generating a sharp slow-motion video given a low frame rate blurry video. We propose a data-driven approach, where the training data is captured with a high frame rate camera and blurry images are simulated through an averaging process. While it is possible to train a neural network to recover the sharp frames from their average, there is no guarantee of temporal smoothness in the resulting video, as the frames are estimated independently. To address the temporal smoothness requirement, we propose a system with two networks: one, DeblurNet, to predict sharp keyframes, and a second, InterpNet, to predict intermediate frames between the generated keyframes. A smooth transition is ensured by interpolating between consecutive keyframes using InterpNet. Moreover, the proposed scheme enables a further increase in frame rate without retraining the network, by applying InterpNet recursively between pairs of sharp frames. We evaluate the proposed method on several datasets, including a novel dataset captured with a Sony RX V camera. We also demonstrate its ability to increase the frame rate by up to 20 times on real blurry videos.
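The two mechanisms the abstract describes (simulating a blurry frame by averaging consecutive sharp frames, and recursively applying an interpolator to double the frame rate per pass) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `simulate_blur` and `upsample` are hypothetical helper names, and `interp` stands in for a trained InterpNet.

```python
import numpy as np

def simulate_blur(sharp_frames):
    """Simulate one blurry frame by averaging a window of consecutive
    sharp high-frame-rate frames, as in the paper's data synthesis."""
    return np.mean(np.stack(sharp_frames), axis=0)

def upsample(frames, interp, passes):
    """Recursively insert an interpolated frame between every pair of
    consecutive frames; each pass roughly doubles the frame count.
    `interp` is any function mapping two frames to a middle frame
    (here a stand-in for InterpNet)."""
    for _ in range(passes):
        out = []
        for a, b in zip(frames[:-1], frames[1:]):
            out += [a, interp(a, b)]
        out.append(frames[-1])
        frames = out
    return frames
```

Starting from N keyframes produced by a deblurring step, k recursive passes yield (N - 1) * 2^k + 1 frames, which is how the frame rate can be raised without retraining.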

Related Material


[pdf]
[bibtex]
@InProceedings{Jin_2019_CVPR,
author = {Jin, Meiguang and Hu, Zhe and Favaro, Paolo},
title = {Learning to Extract Flawless Slow Motion From Blurry Videos},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}