DeblurGAN-v2: Deblurring (Orders-of-Magnitude) Faster and Better

Orest Kupyn, Tetiana Martyniuk, Junru Wu, Zhangyang Wang; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 8878-8887

Abstract


We present a new end-to-end generative adversarial network (GAN) for single-image motion deblurring, named DeblurGAN-v2, which considerably boosts state-of-the-art deblurring performance while being much more flexible and efficient. DeblurGAN-v2 is based on a relativistic conditional GAN with a double-scale discriminator. For the first time, we introduce the Feature Pyramid Network into deblurring, as a core building block in the generator of DeblurGAN-v2. It can flexibly work with a wide range of backbones to navigate the trade-off between performance and efficiency. Plugging in sophisticated backbones (e.g., Inception-ResNet-v2) leads to solid state-of-the-art performance. Meanwhile, with lightweight backbones (e.g., MobileNet and its variants), DeblurGAN-v2 becomes 10-100 times faster than the nearest competitors while staying close to state-of-the-art quality, opening the option of real-time video deblurring. We demonstrate that DeblurGAN-v2 achieves very competitive performance on several popular benchmarks, in terms of both deblurring quality (objective and subjective) and efficiency. In addition, we show that the architecture is also effective for general image restoration tasks. Our models and code will be made available upon acceptance.
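
To make the generator design described above more concrete, the following is a minimal PyTorch/torchvision sketch of an FPN-style deblurring generator built on a MobileNet-V2 backbone. This is not the authors' released implementation: the stage split, channel widths, pyramid fusion, and residual prediction head are illustrative assumptions chosen for brevity.

# Illustrative sketch of an FPN generator with a MobileNet-V2 backbone,
# in the spirit of DeblurGAN-v2 (not the authors' code; stage indices,
# channel widths, and the head design are assumptions).
from collections import OrderedDict

import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
from torchvision.ops import FeaturePyramidNetwork


class FPNMobileNetGenerator(nn.Module):
    def __init__(self, fpn_channels: int = 128):
        super().__init__()
        backbone = torchvision.models.mobilenet_v2().features
        # Split the backbone into stages whose outputs have strides 4/8/16/32.
        self.stage1 = backbone[:4]    # -> 24 channels,   stride 4
        self.stage2 = backbone[4:7]   # -> 32 channels,   stride 8
        self.stage3 = backbone[7:14]  # -> 96 channels,   stride 16
        self.stage4 = backbone[14:]   # -> 1280 channels, stride 32
        self.fpn = FeaturePyramidNetwork([24, 32, 96, 1280], fpn_channels)
        # Fuse the upsampled pyramid levels and predict a residual image.
        self.head = nn.Sequential(
            nn.Conv2d(4 * fpn_channels, fpn_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(fpn_channels, 3, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        c1 = self.stage1(x)
        c2 = self.stage2(c1)
        c3 = self.stage3(c2)
        c4 = self.stage4(c3)
        feats = self.fpn(OrderedDict(p1=c1, p2=c2, p3=c3, p4=c4))
        # Bring every pyramid level to the stride-4 resolution and concatenate.
        target = c1.shape[-2:]
        fused = torch.cat(
            [F.interpolate(f, size=target, mode="nearest") for f in feats.values()],
            dim=1,
        )
        residual = F.interpolate(self.head(fused), size=x.shape[-2:],
                                 mode="bilinear", align_corners=False)
        # Global skip connection: add the predicted residual to the blurry input
        # (input assumed to be normalized to [-1, 1]).
        return torch.clamp(x + residual, -1.0, 1.0)


if __name__ == "__main__":
    net = FPNMobileNetGenerator()
    blurry = torch.randn(1, 3, 256, 256)   # dummy blurry image
    print(net(blurry).shape)               # torch.Size([1, 3, 256, 256])

Swapping the backbone (e.g., for an Inception-ResNet-v2 feature extractor) only changes the stage split and the channel list passed to the FPN, which is the flexibility the abstract refers to.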

Related Material


[pdf]
[bibtex]
@InProceedings{Kupyn_2019_ICCV,
author = {Kupyn, Orest and Martyniuk, Tetiana and Wu, Junru and Wang, Zhangyang},
title = {DeblurGAN-v2: Deblurring (Orders-of-Magnitude) Faster and Better},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}