VDFlow: Joint Learning for Optical Flow and Video Deblurring

Yanyang Yan, Qingbo Wu, Bo Xu, Jingang Zhang, Wenqi Ren; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2020, pp. 872-873

Abstract


Video deblurring is a challenging task because the blur in videos arises from a combination of camera motion, object motion, and depth variation. Recent deep neural networks improve video deblurring performance by concatenating neighboring frames and estimating the latent images directly. In this paper, we propose a unified end-to-end network, called VDFlow, for joint optical flow estimation and video deblurring. VDFlow contains two branches between which feature representations are propagated bi-directionally. The deblurring branch employs an encoder-decoder network, while the optical flow branch is based on FlowNet. Optical flow is no longer used merely as a tool for alignment; instead, it serves as a carrier of motion-trajectory information that helps restore the latent sharp frames. Extensive experiments demonstrate that the proposed method performs favorably against state-of-the-art video deblurring approaches on challenging blurry videos and improves optical flow estimation as well.
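The abstract only outlines the two-branch design, so the following is a minimal PyTorch-style sketch of that idea: an encoder-decoder deblurring branch, a FlowNet-like flow branch, and a single bi-directional feature exchange between them. All module names, layer choices, channel sizes, and the exchange point are hypothetical assumptions, not the paper's actual architecture.

```python
# Sketch of a two-branch network with bi-directional feature propagation.
# Everything below (layer counts, channels, fusion points) is assumed; the
# abstract does not specify VDFlow's concrete architecture.
import torch
import torch.nn as nn


class DeblurEncoderDecoder(nn.Module):
    """Encoder-decoder deblurring branch (placeholder layers)."""

    def __init__(self, in_ch=9, feat_ch=64):  # e.g., 3 concatenated RGB frames
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat_ch, feat_ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat_ch, 3, 4, stride=2, padding=1),
        )


class FlowBranch(nn.Module):
    """FlowNet-like optical flow branch (placeholder layers)."""

    def __init__(self, in_ch=6, feat_ch=64):  # a pair of RGB frames
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 7, stride=2, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 5, stride=2, padding=2), nn.ReLU(inplace=True),
        )
        self.predict_flow = nn.Conv2d(feat_ch, 2, 3, padding=1)  # coarse (1/4-res) flow


class VDFlowSketch(nn.Module):
    """Two branches coupled by one assumed bi-directional feature exchange."""

    def __init__(self, feat_ch=64):
        super().__init__()
        self.deblur = DeblurEncoderDecoder(feat_ch=feat_ch)
        self.flow = FlowBranch(feat_ch=feat_ch)
        # Hypothetical fusion layers carrying features across branches.
        self.flow_to_deblur = nn.Conv2d(feat_ch * 2, feat_ch, 1)
        self.deblur_to_flow = nn.Conv2d(feat_ch * 2, feat_ch, 1)

    def forward(self, frames, frame_pair):
        fd = self.deblur.encoder(frames)       # deblurring features
        ff = self.flow.features(frame_pair)    # motion features
        fd = self.flow_to_deblur(torch.cat([fd, ff], dim=1))  # motion -> deblur
        ff = self.deblur_to_flow(torch.cat([ff, fd], dim=1))  # deblur -> motion
        return self.deblur.decoder(fd), self.flow.predict_flow(ff)


# Example forward pass on dummy data.
model = VDFlowSketch()
sharp, flow = model(torch.randn(1, 9, 64, 64), torch.randn(1, 6, 64, 64))
print(sharp.shape, flow.shape)  # (1, 3, 64, 64), (1, 2, 16, 16)
```

The cross-branch 1x1 convolutions stand in for whatever propagation mechanism the full paper uses; the point of the sketch is only that both tasks are trained end-to-end and each branch consumes features from the other.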

Related Material


[pdf]
[bibtex]
@InProceedings{Yan_2020_CVPR_Workshops,
author = {Yan, Yanyang and Wu, Qingbo and Xu, Bo and Zhang, Jingang and Ren, Wenqi},
title = {VDFlow: Joint Learning for Optical Flow and Video Deblurring},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2020}
}