Flow-Guided Video Inpainting With Scene Templates

Dong Lao, Peihao Zhu, Peter Wonka, Ganesh Sundaramoorthi; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 14599-14608

Abstract


We consider the problem of filling in missing spatio-temporal regions of a video. We provide a novel flow-based solution by introducing a generative model of images in relation to the scene (without missing regions) and mappings from the scene to images. We use the model to jointly infer the scene template, a 2D representation of the scene, and the mappings. This ensures that the generated frame-to-frame flows are consistent with the underlying scene, reducing geometric distortions in flow-based inpainting. The template is mapped to the missing regions in the video by a new (L2-L1) interpolation scheme, creating crisp inpaintings and reducing common blur and distortion artifacts. We show on two benchmark datasets that our approach outperforms the state of the art both quantitatively and in user studies.
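To make the core idea concrete, the sketch below shows how a scene template can fill missing pixels in a frame via a frame-to-template mapping. This is a minimal illustration of template-to-frame transfer only: the function name, array layout, and nearest-neighbor sampling are assumptions for illustration, not the paper's joint inference or its (L2-L1) interpolation scheme.

```python
import numpy as np

def fill_from_template(frame, mask, template, mapping):
    """Fill masked pixels of `frame` by sampling the scene template.

    frame:    (H, W) image with missing regions
    mask:     (H, W) bool, True where pixels are missing
    template: (Ht, Wt) scene template (no missing regions)
    mapping:  (H, W, 2) frame-to-template coordinates, (y, x) per pixel

    Note: nearest-neighbor sampling is used here for simplicity; the
    paper uses an (L2-L1) interpolation scheme instead.
    """
    out = frame.copy()
    ys, xs = np.nonzero(mask)  # missing pixel locations
    # Round the mapped coordinates and clamp to the template bounds.
    ty = np.clip(np.round(mapping[ys, xs, 0]).astype(int), 0, template.shape[0] - 1)
    tx = np.clip(np.round(mapping[ys, xs, 1]).astype(int), 0, template.shape[1] - 1)
    out[ys, xs] = template[ty, tx]
    return out
```

With an identity mapping, each missing pixel is simply copied from the same location in the template; in the actual method the mapping comes from the jointly inferred scene-to-image transformations.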

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Lao_2021_ICCV,
    author    = {Lao, Dong and Zhu, Peihao and Wonka, Peter and Sundaramoorthi, Ganesh},
    title     = {Flow-Guided Video Inpainting With Scene Templates},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {14599-14608}
}