Comparing Correspondences: Video Prediction With Correspondence-Wise Losses

Daniel Geng, Max Hamilton, Andrew Owens; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 3365-3376

Abstract


Image prediction methods often struggle on tasks that require changing the positions of objects, such as video prediction, producing blurry images that average over the many positions that objects might occupy. In this paper, we propose a simple change to existing image similarity metrics that makes them more robust to positional errors: we match the images using optical flow, then measure the visual similarity of corresponding pixels. This change leads to crisper and more perceptually accurate predictions, and does not require modifications to the image prediction network. We apply our method to a variety of video prediction tasks, where it obtains strong performance with simple network architectures, and to the closely related task of video interpolation. Code and results are available at our webpage: https://dangeng.github.io/CorrWiseLosses
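The core idea is simple enough to sketch in a few lines: estimate optical flow between the target and the prediction, warp the prediction so that corresponding pixels line up, and only then apply an ordinary pixelwise loss. Below is a minimal PyTorch sketch of that idea, assuming an off-the-shelf flow estimator is supplied as the callable `flow_net` (returning flow in pixels as a (B, 2, H, W) tensor); the function names `warp` and `correspondence_wise_loss` are illustrative, and this is not the authors' implementation, which is available at the webpage linked above.

```python
import torch
import torch.nn.functional as F

def warp(image, flow):
    """Backward-warp `image` (B, C, H, W) with a dense flow field (B, 2, H, W), in pixels."""
    b, _, h, w = image.shape
    # Base sampling grid of pixel coordinates (x, y).
    ys, xs = torch.meshgrid(
        torch.arange(h, device=image.device, dtype=image.dtype),
        torch.arange(w, device=image.device, dtype=image.dtype),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0)        # (1, 2, H, W)
    coords = grid + flow                                    # where each output pixel samples from
    # Normalize coordinates to [-1, 1] for grid_sample.
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    grid_norm = torch.stack((coords_x, coords_y), dim=-1)   # (B, H, W, 2)
    return F.grid_sample(image, grid_norm, align_corners=True)

def correspondence_wise_loss(pred, target, flow_net, base_loss=F.l1_loss):
    """Align `pred` to `target` via estimated flow, then compare corresponding pixels."""
    with torch.no_grad():
        # Flow from target coordinates to pred coordinates (assumed estimator).
        flow = flow_net(target, pred)                       # (B, 2, H, W), in pixels
    pred_aligned = warp(pred, flow)                         # resample pred at matched locations
    return base_loss(pred_aligned, target)
```

In practice the flow could come from a pretrained estimator such as RAFT, and the base loss could equally be a perceptual metric; the paper's actual loss involves additional details not shown in this sketch.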

Related Material


@InProceedings{Geng_2022_CVPR,
    author    = {Geng, Daniel and Hamilton, Max and Owens, Andrew},
    title     = {Comparing Correspondences: Video Prediction With Correspondence-Wise Losses},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {3365-3376}
}