Free-Form Video Inpainting With 3D Gated Convolution and Temporal PatchGAN

Ya-Liang Chang, Zhe Yu Liu, Kuan-Ying Lee, Winston Hsu; The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 9066-9075

Abstract


Free-form video inpainting is a very challenging task with broad applications in video editing, such as text removal. Existing patch-based methods cannot handle non-repetitive structures such as faces, while directly applying image-based inpainting models to videos frame by frame results in temporal inconsistency (see the demo video: https://www.youtube.com/watch?v=BuTYfo4bO2I&list=PLnEeMdoBDCISRm0EZYFcQuaJ5ITUaaEIb&index=1). In this paper, we introduce a deep learning based free-form video inpainting model with proposed 3D gated convolutions to tackle the uncertainty of free-form masks and a novel Temporal PatchGAN loss to enhance temporal consistency. In addition, we collect videos and design a free-form mask generation algorithm to build the free-form video inpainting (FVI) dataset for training and evaluating video inpainting models. We demonstrate the benefits of these components, and experiments on both the FaceForensics and FVI datasets suggest that our method is superior to existing ones. Related source code, full-resolution result videos, and the FVI dataset can be found on GitHub: https://github.com/amjltc295/Free-Form-Video-Inpainting
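To illustrate the gated-convolution idea underlying the paper's 3D gated convolutions, here is a minimal NumPy sketch: each layer computes a feature branch and a sigmoid gating branch, and the gate learns to attenuate responses at masked (invalid) spatio-temporal locations. The function names, the loop-based valid convolution, and the random weights are illustrative assumptions, not the paper's implementation, which uses learned 3D gated convolutions inside a deep generator network.

```python
import numpy as np

def conv3d(x, w):
    """Valid (no-padding) 3D convolution.
    x: (C_in, T, H, W) input volume; w: (C_out, C_in, kt, kh, kw) kernels."""
    c_out, c_in, kt, kh, kw = w.shape
    _, t_in, h_in, w_in = x.shape
    out = np.zeros((c_out, t_in - kt + 1, h_in - kh + 1, w_in - kw + 1))
    for co in range(c_out):
        for t in range(out.shape[1]):
            for i in range(out.shape[2]):
                for j in range(out.shape[3]):
                    # Dot product of the kernel with the local 3D patch
                    out[co, t, i, j] = np.sum(
                        w[co] * x[:, t:t + kt, i:i + kh, j:j + kw])
    return out

def gated_conv3d(x, w_feat, w_gate):
    """3D gated convolution: feature branch modulated by a learned soft gate
    in [0, 1], so the layer can suppress activations at masked voxels."""
    features = np.maximum(conv3d(x, w_feat), 0.0)        # ReLU feature branch
    gate = 1.0 / (1.0 + np.exp(-conv3d(x, w_gate)))      # sigmoid gating branch
    return features * gate

# Toy example: one video clip of 4 frames, 8x8 pixels, 2 channels.
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 4, 8, 8))
w_feat = rng.standard_normal((3, 2, 3, 3, 3)) * 0.1
w_gate = rng.standard_normal((3, 2, 3, 3, 3)) * 0.1
y = gated_conv3d(x, w_feat, w_gate)
print(y.shape)  # (3, 2, 6, 6): 3 output channels, valid conv shrinks T/H/W
```

Compared with a vanilla convolution, the extra gating branch lets the network treat hole and non-hole regions differently at every layer, which is what makes the operation suitable for arbitrary free-form masks.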

Related Material


[pdf]
[bibtex]
@InProceedings{Chang_2019_ICCV,
author = {Chang, Ya-Liang and Liu, Zhe Yu and Lee, Kuan-Ying and Hsu, Winston},
title = {Free-Form Video Inpainting With 3D Gated Convolution and Temporal PatchGAN},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}