Internal Video Inpainting by Implicit Long-Range Propagation

Hao Ouyang, Tengfei Wang, Qifeng Chen; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 14579-14588

Abstract


We propose a novel framework for video inpainting by adopting an internal learning strategy. Unlike previous methods that use optical flow for cross-frame context propagation to inpaint unknown regions, we show that this can be achieved implicitly by fitting a convolutional neural network to known regions. Moreover, to handle challenging sequences with ambiguous backgrounds or long-term occlusion, we design two regularization terms to preserve high-frequency details and long-term temporal consistency. Extensive experiments on the DAVIS dataset demonstrate that the proposed method achieves state-of-the-art inpainting quality both quantitatively and qualitatively. We further extend the proposed method to another challenging task: learning to remove an object from a video given a single object mask in only one frame of a 4K video.
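The following is a minimal, illustrative sketch of the internal learning strategy the abstract describes: a small network is fit only to the known pixels of one video, so that the trained network itself propagates cross-frame context into the masked regions at inference time. All names here (InpaintCNN, internal_inpaint, num_iters) are hypothetical, and the paper's actual architecture and its two regularization terms are not reproduced.

```python
# Hedged sketch of per-video internal learning for inpainting (not the authors' code).
import torch
import torch.nn as nn

class InpaintCNN(nn.Module):
    """Tiny encoder-decoder stand-in for the per-video inpainting network."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def internal_inpaint(frames, masks, num_iters=2000, lr=1e-4):
    """frames: (T, 3, H, W) video in [0, 1]; masks: (T, 1, H, W), 1 = known pixel."""
    model = InpaintCNN()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(num_iters):
        t = torch.randint(0, frames.shape[0], (1,)).item()  # sample one frame
        frame, mask = frames[t:t + 1], masks[t:t + 1]
        pred = model(frame * mask)                           # network only sees known pixels
        # Reconstruction loss is computed on known regions only.
        loss = ((pred - frame).abs() * mask).sum() / mask.sum().clamp(min=1.0)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        # After fitting, the same network fills the unknown regions of every frame.
        return torch.stack([model(f[None] * m[None])[0] for f, m in zip(frames, masks)])
```

Because the network is optimized over all frames of a single video, information observed in one frame can implicitly fill occluded regions in another, without explicit optical-flow propagation.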

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Ouyang_2021_ICCV,
    author    = {Ouyang, Hao and Wang, Tengfei and Chen, Qifeng},
    title     = {Internal Video Inpainting by Implicit Long-Range Propagation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {14579-14588}
}