Deep Multi-Scale Convolutional Neural Network for Dynamic Scene Deblurring

Seungjun Nah, Tae Hyun Kim, Kyoung Mu Lee; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 3883-3891

Abstract


Non-uniform blind deblurring for general dynamic scenes is a challenging computer vision problem, as blurs arise not only from multiple object motions but also from camera shake and scene depth variation. To remove these complicated motion blurs, conventional energy-optimization-based methods rely on simple assumptions, such as that the blur kernel is partially uniform or locally linear. Moreover, recent machine-learning-based methods also depend on synthetic blur datasets generated under these assumptions. As a result, conventional deblurring methods fail to remove blurs whose kernels are difficult to approximate or parameterize (e.g., at object motion boundaries). In this work, we propose a multi-scale convolutional neural network that restores sharp images in an end-to-end manner where blur is caused by various sources. In addition, we present a multi-scale loss function that mimics conventional coarse-to-fine approaches. Furthermore, we propose a new large-scale dataset that provides pairs of realistic blurry images and the corresponding ground-truth sharp images obtained with a high-speed camera. With the proposed model trained on this dataset, we demonstrate empirically that our method achieves state-of-the-art performance in dynamic scene deblurring, both qualitatively and quantitatively.
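The multi-scale loss described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; it only shows the idea of supervising the network's output at each pyramid level against a correspondingly downsampled sharp image. The function names (`downsample`, `multi_scale_loss`) and the scale factors are illustrative assumptions, and average pooling stands in for whatever resizing the actual training pipeline uses.

```python
import numpy as np

def downsample(img, factor):
    """Average-pool an H x W (x C) image by an integer factor
    (a stand-in for the pyramid downsampling; assumption, not the paper's exact op)."""
    h, w = img.shape[:2]
    img = img[: h - h % factor, : w - w % factor]
    pooled_shape = (img.shape[0] // factor, factor,
                    img.shape[1] // factor, factor) + img.shape[2:]
    return img.reshape(pooled_shape).mean(axis=(1, 3))

def multi_scale_loss(outputs, sharp, scales=(1, 2, 4)):
    """Average the per-scale MSE between each scale's predicted image and
    the sharp image downsampled to the same resolution (coarse-to-fine
    supervision, as sketched here under the assumptions above)."""
    total = 0.0
    for out, s in zip(outputs, scales):
        target = downsample(sharp, s) if s > 1 else sharp
        total += np.mean((out - target) ** 2)
    return total / len(scales)
```

A perfect set of predictions at every scale drives this loss to zero, while an error at any single scale contributes to the total, which is what encourages the coarser levels to guide the finer ones during training.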

Related Material


[pdf] [Supp] [arXiv] [video]
[bibtex]
@InProceedings{Nah_2017_CVPR,
author = {Nah, Seungjun and Kim, Tae Hyun and Lee, Kyoung Mu},
title = {Deep Multi-Scale Convolutional Neural Network for Dynamic Scene Deblurring},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {July},
year = {2017}
}