Deep Shutter Unrolling Network

Peidong Liu, Zhaopeng Cui, Viktor Larsson, Marc Pollefeys; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 5941-5949

Abstract


We present a novel network for rolling shutter effect correction. Our network takes two consecutive rolling shutter images and estimates the corresponding global shutter image of the latest frame. The dense displacement field from a rolling shutter image to its corresponding global shutter image is estimated via a motion estimation network. The learned feature representation of a rolling shutter image is then warped, via the displacement field, to its global shutter representation by a differentiable forward warping block. An image decoder recovers the global shutter image from the warped feature representation. Our network can be trained end-to-end and requires only the global shutter image for supervision. Since no public dataset is available, we also contribute two large datasets: the Carla-RS dataset and the Fastec-RS dataset. Experimental results demonstrate that our network outperforms state-of-the-art methods. We make both our code and datasets available at https://github.com/ethliup/DeepUnrollNet.
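The pipeline described in the abstract (two rolling shutter frames -> motion estimation network -> dense displacement field -> warping of learned features -> image decoder) can be summarized with a short PyTorch-style sketch. The code below is an illustrative assumption, not the authors' released implementation (see the repository linked above): all module and variable names are invented, and the paper's differentiable forward warping block is replaced by a simpler bilinear backward sampling step via grid_sample for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    # Minimal conv + ReLU unit; the real encoder/decoder are deeper.
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class UnrollSketch(nn.Module):
    """Hypothetical sketch of the rolling shutter correction pipeline."""
    def __init__(self, feat_ch=32):
        super().__init__()
        # Encoder: learned feature representation of the rolling shutter image.
        self.encoder = conv_block(3, feat_ch)
        # Motion estimation: predicts a dense 2-channel RS->GS displacement field
        # from the two consecutive rolling shutter frames.
        self.motion_net = nn.Sequential(conv_block(6, feat_ch),
                                        nn.Conv2d(feat_ch, 2, 3, padding=1))
        # Decoder: recovers the global shutter image from the warped features.
        self.decoder = nn.Sequential(conv_block(feat_ch, feat_ch),
                                     nn.Conv2d(feat_ch, 3, 3, padding=1))

    def forward(self, rs_prev, rs_curr):
        b, _, h, w = rs_curr.shape
        feat = self.encoder(rs_curr)  # features of the latest rolling shutter frame
        flow = self.motion_net(torch.cat([rs_prev, rs_curr], dim=1))  # (b, 2, h, w)
        # Turn the displacement field into a sampling grid normalized to [-1, 1].
        ys, xs = torch.meshgrid(
            torch.arange(h, device=flow.device, dtype=flow.dtype),
            torch.arange(w, device=flow.device, dtype=flow.dtype),
            indexing="ij",
        )
        grid_x = 2.0 * (xs + flow[:, 0]) / (w - 1) - 1.0
        grid_y = 2.0 * (ys + flow[:, 1]) / (h - 1) - 1.0
        grid = torch.stack([grid_x, grid_y], dim=-1)  # (b, h, w, 2)
        # Simplified bilinear (backward) warping; the paper uses forward warping.
        warped = F.grid_sample(feat, grid, align_corners=True)
        return self.decoder(warped)  # estimated global shutter image

if __name__ == "__main__":
    net = UnrollSketch()
    rs_prev, rs_curr = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
    print(net(rs_prev, rs_curr).shape)  # torch.Size([1, 3, 64, 64])

Because only the global shutter image is needed for supervision, such a model could be trained end-to-end with a simple reconstruction loss between the decoder output and the ground-truth global shutter frame.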

Related Material


@InProceedings{Liu_2020_CVPR,
    author    = {Liu, Peidong and Cui, Zhaopeng and Larsson, Viktor and Pollefeys, Marc},
    title     = {Deep Shutter Unrolling Network},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2020},
    pages     = {5941-5949}
}