Online Video Deblurring via Dynamic Temporal Blending Network

Tae Hyun Kim, Kyoung Mu Lee, Bernhard Schölkopf, Michael Hirsch; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 4038-4047


State-of-the-art video deblurring methods are capable of removing non-uniform blur caused by unwanted camera shake and/or object motion in dynamic scenes. However, most existing methods are based on batch processing and thus need access to all recorded frames, which makes them computationally demanding and time-consuming and limits their practical use. In contrast, we propose an online (sequential) video deblurring method based on a spatio-temporal recurrent network that allows for real-time performance. In particular, we introduce a novel architecture which extends the receptive field while keeping the overall size of the network small to enable fast execution. In doing so, our network is able to remove even large blur caused by strong camera shake and/or fast-moving objects. Furthermore, we propose a novel network layer that enforces temporal consistency between consecutive frames by dynamic temporal blending, which compares and adaptively (at test time) shares features obtained at different time steps. We show the superiority of the proposed method in an extensive experimental evaluation.
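The core idea of dynamic temporal blending can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: in the actual network the per-position blend weights are produced by a learned layer, whereas here they are derived, purely for illustration, from the dissimilarity of the two feature maps (the function name and the `sharpness` parameter are assumptions).

```python
import numpy as np

def dynamic_temporal_blending(feat_prev, feat_curr, sharpness=1.0):
    """Adaptively blend feature maps from consecutive time steps.

    Hypothetical sketch: blend weights come from a hand-crafted
    dissimilarity measure; the paper's network learns them instead.
    """
    # Per-position squared difference as a dissimilarity measure (assumption).
    diff = (feat_curr - feat_prev) ** 2
    # Weight in [0.5, 1): similar features give w ~ 0.5 (equal sharing with
    # the past), dissimilar features push w toward 1 (trust the current frame).
    w = 1.0 / (1.0 + np.exp(-sharpness * diff))
    # Convex combination enforces temporal consistency between frames.
    return w * feat_curr + (1.0 - w) * feat_prev
```

When the two feature maps agree, the output simply passes them through; where they disagree (e.g. at a moving object), the current frame's features dominate, so stale information from the previous step is suppressed.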

Related Material

[pdf] [arXiv]
@InProceedings{Kim_2017_ICCV,
author = {Kim, Tae Hyun and Lee, Kyoung Mu and Sch{\"o}lkopf, Bernhard and Hirsch, Michael},
title = {Online Video Deblurring via Dynamic Temporal Blending Network},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}