StableVideo: Text-driven Consistency-aware Diffusion Video Editing

Wenhao Chai, Xun Guo, Gaoang Wang, Yan Lu; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 23040-23050

Abstract


Diffusion-based methods can generate realistic images and videos, but they struggle to edit existing objects in a video while preserving their geometry over time. This prevents diffusion models from being applied to natural video editing. In this paper, we tackle this problem by introducing temporal dependency to existing text-driven diffusion models, which allows them to generate a consistent appearance for the new objects. Specifically, we develop a novel inter-frame propagation mechanism for diffusion video editing that leverages the concept of layered representations to propagate geometry and appearance information from one frame to the next. Building on this mechanism, we construct a text-driven video editing framework, namely StableVideo, which achieves consistency-aware video editing. Extensive experiments demonstrate the strong editing capability of our approach; compared with state-of-the-art video editing methods, it shows superior qualitative and quantitative results.
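
To make the propagation idea concrete, the sketch below illustrates one way an edit could travel between frames through a layered representation: each frame is assumed to come with per-pixel coordinates into a shared 2D atlas (as produced by layered neural atlas methods), the edited appearance of one keyframe is splatted into that atlas, and the atlas is re-sampled with the next frame's coordinates. The function name, tensor shapes, and nearest-neighbour splatting are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn.functional as F

def propagate_edit(edited_frame, uv_src, uv_dst, atlas_size=256):
    # edited_frame: (1, 3, H, W) edited RGB of the source keyframe.
    # uv_src / uv_dst: (1, H, W, 2) per-pixel atlas coordinates in [-1, 1]
    # for the source and target frames (hypothetical output of a layered
    # atlas decomposition; not the paper's actual interface).
    atlas = torch.zeros(1, 3, atlas_size, atlas_size)

    # Splat the edited source frame into the shared atlas.
    # Nearest-neighbour scatter kept deliberately simple for this sketch;
    # pixels mapping to the same atlas cell simply overwrite each other.
    ix = ((uv_src[..., 0] + 1) / 2 * (atlas_size - 1)).long().clamp(0, atlas_size - 1)
    iy = ((uv_src[..., 1] + 1) / 2 * (atlas_size - 1)).long().clamp(0, atlas_size - 1)
    atlas[0, :, iy.view(-1), ix.view(-1)] = edited_frame[0].reshape(3, -1)

    # Re-sample the atlas with the target frame's coordinates to obtain the
    # appearance the edit implies for the next frame.
    return F.grid_sample(atlas, uv_dst, mode="bilinear", align_corners=True)

# Toy usage with random tensors, only to show the shapes involved.
H, W = 64, 64
edited_t = torch.rand(1, 3, H, W)            # diffusion-edited keyframe t
uv_t = torch.rand(1, H, W, 2) * 2 - 1        # atlas coords of frame t
uv_t1 = torch.rand(1, H, W, 2) * 2 - 1       # atlas coords of frame t+1
propagated_t1 = propagate_edit(edited_t, uv_t, uv_t1)
print(propagated_t1.shape)                   # torch.Size([1, 3, 64, 64])

In a full pipeline the propagated appearance would typically serve as guidance for the diffusion editor when generating the next keyframe, rather than being used directly as the output frame.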

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Chai_2023_ICCV,
    author    = {Chai, Wenhao and Guo, Xun and Wang, Gaoang and Lu, Yan},
    title     = {StableVideo: Text-driven Consistency-aware Diffusion Video Editing},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {23040-23050}
}