Pathways on the Image Manifold: Image Editing via Video Generation

Noam Rotstein, Gal Yona, Daniel Silver, Roy Velich, David Bensaid, Ron Kimmel; Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), 2025, pp. 7857-7866

Abstract


Recent image diffusion models have driven remarkable progress in image editing. However, significant challenges remain, as these models often struggle to follow complex edit instructions accurately and frequently compromise fidelity by altering key elements of the original image. In parallel, video generation has advanced rapidly, with models that effectively function as consistent and continuous world simulators. In this paper, we propose merging these two fields by utilizing image-to-video models for image editing. We reformulate image editing as a temporal process, using pretrained video models to create smooth transitions from the original image to the desired edit. This approach traverses the image manifold continuously, ensuring consistent edits while preserving the original image's key aspects. Our method achieves state-of-the-art results on text-based image editing, demonstrating significant improvements in both edit accuracy and image preservation. Visit our project page: https://rotsteinnoam.github.io/Frame2Frame/.
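
The recipe the abstract describes, generating a video that starts at the source image and then choosing the frame that realizes the edit, can be illustrated with a minimal sketch. The names below (generate_video, edit_score, edit_via_video, temporal_caption) and the max-score frame-selection rule are assumptions introduced for illustration, not the authors' released code; the paper itself specifies the actual video model, prompting, and selection procedure.

    # Minimal sketch of image editing via video generation, assuming
    # hypothetical stand-ins for the pretrained components named in the abstract.
    from PIL import Image

    def generate_video(source, temporal_caption, num_frames):
        """Hypothetical text-conditioned image-to-video model: returns a smooth
        clip (list of PIL images) that starts at `source` and evolves according
        to `temporal_caption`."""
        raise NotImplementedError("plug in a pretrained image-to-video model")

    def edit_score(frame, edit_instruction):
        """Hypothetical scorer (e.g. a vision-language model) rating how well
        `frame` realizes the requested edit."""
        raise NotImplementedError("plug in a frame-selection scorer")

    def edit_via_video(source, edit_instruction, temporal_caption, num_frames=25):
        """Editing as a temporal process: traverse the image manifold along a
        generated video, then pick the frame that best realizes the edit."""
        frames = generate_video(source, temporal_caption, num_frames)
        # The first frame is the original image; later frames drift toward the
        # edit, so scoring frames trades off edit accuracy against fidelity.
        return max(frames, key=lambda f: edit_score(f, edit_instruction))

Here temporal_caption is assumed to be a rephrasing of the static edit instruction as a gradual change over time, which is what would let an image-to-video model produce the smooth transition the abstract refers to.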

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Rotstein_2025_CVPR,
    author    = {Rotstein, Noam and Yona, Gal and Silver, Daniel and Velich, Roy and Bensaid, David and Kimmel, Ron},
    title     = {Pathways on the Image Manifold: Image Editing via Video Generation},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {7857-7866}
}