CCEdit: Creative and Controllable Video Editing via Diffusion Models

Ruoyu Feng, Wenming Weng, Yanhui Wang, Yuhui Yuan, Jianmin Bao, Chong Luo, Zhibo Chen, Baining Guo; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 6712-6722

Abstract


In this paper, we present CCEdit, a versatile generative video editing framework based on diffusion models. Our approach employs a novel trident network structure that separates structure and appearance control, ensuring precise and creative editing capabilities. Utilizing the foundational ControlNet architecture, we maintain the structural integrity of the video during editing. The incorporation of an additional appearance branch enables users to exert fine-grained control over the edited key frame. These two side branches seamlessly integrate into the main branch, which is constructed upon existing text-to-image (T2I) generation models, through learnable temporal layers. The versatility of our framework is demonstrated through a diverse range of choices in both structure representations and personalized T2I models, as well as the option to provide the edited key frame. To facilitate comprehensive evaluation, we introduce the BalanceCC benchmark dataset, comprising 100 videos with four target prompts for each video. Our extensive user studies compare CCEdit with eight state-of-the-art video editing methods. The outcomes demonstrate CCEdit's substantial superiority over all other methods.
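To make the trident design concrete, the sketch below wires a main denoising branch to a ControlNet-style structure branch and an appearance branch, with zero-initialized injection projections and a learnable temporal layer over frames. This is a minimal PyTorch sketch based only on the abstract: the module names (`TridentEditor`, `Branch`), channel sizes, and the additive feature-injection scheme are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the trident structure described in the abstract.
# Everything here (module names, channel sizes, additive injection) is
# an illustrative assumption, not the authors' implementation.
import torch
import torch.nn as nn


class Branch(nn.Module):
    """Stand-in for a U-Net-style branch: a small conv block."""

    def __init__(self, in_ch: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class TridentEditor(nn.Module):
    """Main T2I branch plus structure and appearance side branches.

    Side-branch features enter the main branch through zero-initialized
    1x1 convolutions (the ControlNet convention), so the pretrained main
    branch is unchanged at the start of training. The learnable temporal
    layers are approximated here by a 1D convolution over the frame axis.
    """

    def __init__(self, ch: int = 64):
        super().__init__()
        self.main = Branch(4, ch)        # latent video frames
        self.structure = Branch(1, ch)   # per-frame structure maps (e.g., sketches)
        self.appearance = Branch(4, ch)  # edited key-frame latent
        self.proj_s = nn.Conv2d(ch, ch, 1)
        self.proj_a = nn.Conv2d(ch, ch, 1)
        for proj in (self.proj_s, self.proj_a):
            nn.init.zeros_(proj.weight)
            nn.init.zeros_(proj.bias)
        self.temporal = nn.Conv1d(ch, ch, 3, padding=1)

    def forward(self, video, structure, keyframe):
        b, t, _, h, w = video.shape
        feat = self.main(video.flatten(0, 1))  # (b*t, ch, h, w)
        # Structure control: per-frame residual features, ControlNet-style.
        feat = feat + self.proj_s(self.structure(structure.flatten(0, 1)))
        # Appearance control: the single edited key frame broadcast to all frames.
        app = self.proj_a(self.appearance(keyframe))  # (b, ch, h, w)
        feat = feat + app.repeat_interleave(t, dim=0)
        # Temporal mixing: fold space into the batch, convolve over frames.
        c = feat.shape[1]
        feat = feat.reshape(b, t, c, h * w).permute(0, 3, 2, 1).reshape(-1, c, t)
        feat = self.temporal(feat)
        return feat.reshape(b, h * w, c, t).permute(0, 3, 2, 1).reshape(b, t, c, h, w)


if __name__ == "__main__":
    model = TridentEditor()
    video = torch.randn(1, 8, 4, 32, 32)    # 8 latent frames
    sketch = torch.randn(1, 8, 1, 32, 32)   # 8 structure maps
    key = torch.randn(1, 4, 32, 32)         # one edited key-frame latent
    print(model(video, sketch, key).shape)  # torch.Size([1, 8, 64, 32, 32])
```

Zero-initializing the side-branch projections follows the ControlNet convention: at the start of training the side branches contribute nothing, so the pretrained main branch's behavior is preserved and control is learned gradually.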

Related Material


[bibtex]
@InProceedings{Feng_2024_CVPR,
    author    = {Feng, Ruoyu and Weng, Wenming and Wang, Yanhui and Yuan, Yuhui and Bao, Jianmin and Luo, Chong and Chen, Zhibo and Guo, Baining},
    title     = {CCEdit: Creative and Controllable Video Editing via Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {6712-6722}
}