Video Interpolation with Diffusion Models

Siddhant Jain, Daniel Watson, Eric Tabellion, Aleksander Hołynski, Ben Poole, Janne Kontkanen; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 7341-7351

Abstract


We present VIDIM, a generative model for video interpolation that creates short videos given a start and end frame. To achieve high fidelity and generate motions unseen in the input data, VIDIM uses cascaded diffusion models to first generate the target video at low resolution, and then generate the high-resolution video conditioned on the low-resolution generated video. We compare VIDIM to previous state-of-the-art methods for video interpolation and demonstrate how such works fail in most settings where the underlying motion is complex, nonlinear, or ambiguous, while VIDIM can easily handle these cases. We additionally demonstrate how classifier-free guidance on the start and end frames, and conditioning the super-resolution model on the original high-resolution frames without additional parameters, unlock high-fidelity results. VIDIM is fast to sample from, as it jointly denoises all the frames to be generated; it requires fewer than a billion parameters per diffusion model to produce compelling results, and still enjoys scalability and improved quality at larger parameter counts. Please see our project page at vidiminterpolation.github.io.
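
The guidance mechanism mentioned in the abstract is worth unpacking: classifier-free guidance runs the denoiser twice per sampling step, once conditioned on the given start and end frames and once with that conditioning dropped, then extrapolates between the two noise predictions. Below is a minimal NumPy sketch of one such step for a denoiser that operates on all in-between frames jointly; every name here (cfg_denoise_step, the zeroed-out null conditioning, guidance_weight) is an illustrative assumption, not VIDIM's actual implementation.

    import numpy as np

    def cfg_denoise_step(model, z_t, t, start_frame, end_frame, guidance_weight=2.0):
        # Conditional pass: the denoiser sees the real start/end frames.
        eps_cond = model(z_t, t, start_frame, end_frame)
        # Unconditional pass: conditioning is dropped; zeroing the frames is
        # one common convention (an assumption here, not necessarily VIDIM's).
        null = np.zeros_like(start_frame)
        eps_uncond = model(z_t, t, null, null)
        # Guided estimate: extrapolate from unconditional toward conditional.
        return eps_uncond + guidance_weight * (eps_cond - eps_uncond)

    # Toy usage with a stand-in denoiser that predicts zeros.
    z_t = np.random.randn(7, 64, 64, 3)  # 7 noisy in-between frames, denoised jointly
    f0 = np.random.randn(64, 64, 3)      # given start frame
    f1 = np.random.randn(64, 64, 3)      # given end frame
    dummy_model = lambda z, t, s, e: np.zeros_like(z)
    eps_hat = cfg_denoise_step(dummy_model, z_t, t=0.5, start_frame=f0, end_frame=f1)

Larger guidance weights push the generated frames to agree more strongly with the given endpoints, typically at some cost in sample diversity.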

Related Material


@InProceedings{Jain_2024_CVPR,
    author    = {Jain, Siddhant and Watson, Daniel and Tabellion, Eric and Ho{\l}ynski, Aleksander and Poole, Ben and Kontkanen, Janne},
    title     = {Video Interpolation with Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {7341-7351}
}