Latent-Reframe: Enabling Camera Control for Video Diffusion Model without Training


Complex Pose Results of Latent-Reframe

Abstract

Precise camera pose control is crucial for video generation with diffusion models. Existing methods require fine-tuning on additional datasets of videos paired with camera pose annotations, a process that is data-intensive and computationally costly and can disrupt the pre-trained model's distribution. We introduce Latent-Reframe, which enables camera control in a pre-trained video diffusion model without fine-tuning. Unlike existing methods, Latent-Reframe operates during the sampling stage, maintaining efficiency while preserving the original model distribution. Our approach reframes the latent code of video frames to align with the input camera trajectory through time-aware point clouds. Latent code inpainting and harmonization then refine the model's latent space, ensuring high-quality video generation. Experimental results demonstrate that Latent-Reframe achieves camera control precision and video quality comparable or superior to training-based methods, without fine-tuning on additional datasets.
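
To make the reframing idea concrete, below is a minimal, illustrative sketch of warping a single latent frame to a new camera pose through a point cloud. The pinhole-camera setup and all helper names are my own assumptions for illustration, not the paper's actual implementation; the paper's time-aware point clouds, latent inpainting, and harmonization are only indicated in comments.

```python
# Illustrative sketch only: lift a latent frame to a point cloud with a depth map,
# move the points to a target camera pose, and splat them back onto the latent grid.
# The depth source, intrinsics, and pose convention are assumptions, not the paper's API.
import torch

def reframe_latent(latent, depth, K, rel_pose):
    """Warp one latent frame to a new camera pose through a point cloud.

    latent:   (C, H, W) latent code of a single frame
    depth:    (H, W) depth map aligned to the latent grid (assumed available)
    K:        (3, 3) camera intrinsics at the latent resolution
    rel_pose: (4, 4) target-from-source camera transform from the input trajectory
    Returns the reframed latent and a visibility mask (1 = observed after warping).
    """
    C, H, W = latent.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).float().reshape(3, -1)

    # Back-project latent pixels into 3D (the point cloud) using the depth map.
    cam_pts = torch.linalg.inv(K) @ pix * depth.reshape(1, -1)
    cam_pts = torch.cat([cam_pts, torch.ones(1, H * W)], dim=0)  # homogeneous coords

    # Transform the point cloud into the target camera frame and project it.
    tgt_pts = rel_pose @ cam_pts
    proj = K @ tgt_pts[:3]
    z = proj[2].clamp(min=1e-6)
    u = (proj[0] / z).round().long()
    v = (proj[1] / z).round().long()

    # Splat source latent features onto the target grid. Pixels left untouched
    # (mask = 0) correspond to newly revealed regions, which would be filled by
    # the latent inpainting and harmonization steps described in the abstract.
    out = torch.zeros_like(latent)
    mask = torch.zeros(H, W)
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (tgt_pts[2] > 0)
    out[:, v[valid], u[valid]] = latent.reshape(C, -1)[:, valid]
    mask[v[valid], u[valid]] = 1.0
    return out, mask
```

In use, each frame's depth and the pose from the input camera trajectory would drive this warp at sampling time, so no fine-tuning of the diffusion model is involved; the masked regions are then completed in latent space rather than pixel space.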

Method

Video Comparison with SOTA

Different Base Models of Latent-Reframe

Basic Rotational Results of Latent-Reframe

Basic Translational Results of Latent-Reframe

Different Style Results of Latent-Reframe

Image-to-Video Results of Latent-Reframe

Higher Resolution / Longer Video Results of Latent-Reframe