Hierarchical Spatio-temporal Decoupling for Text-to-Video Generation
Abstract
Although diffusion models have shown powerful abilities to generate photorealistic images, generating videos that are realistic and diverse still remains in its infancy. One of the key reasons is that current methods intertwine spatial content and temporal dynamics, leading to a notably increased complexity of text-to-video generation (T2V). In this work, we propose HiGen, a diffusion model-based method that improves performance by decoupling the spatial and temporal factors of videos from two perspectives, i.e., structure level and content level. At the structure level, we decompose the T2V task into two steps, i.e., spatial reasoning and temporal reasoning, using a unified denoiser. Specifically, we generate spatially coherent priors from text during spatial reasoning and then generate temporally coherent motions from these priors during temporal reasoning. At the content level, we extract two subtle cues from the content of the input video that can express motion and appearance changes, respectively. These two cues then guide the model's training for generating videos, enabling flexible content variations and enhancing temporal stability. Through the decoupled paradigm, HiGen can effectively reduce the complexity of this task and generate realistic videos with both semantic accuracy and motion stability. Extensive experiments demonstrate the superior performance of HiGen over the state-of-the-art T2V methods. We have released our source code and models.
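The structure-level decoupling described in the abstract can be summarized as a short sketch: one denoiser is first run on a single-frame latent to obtain a spatial prior from text, and then run again over the full frame sequence, conditioned on that prior and on content-level motion and appearance cues. The sketch below is a hypothetical illustration only, not the released HiGen implementation; the `UnifiedDenoiser` module, the toy `denoise` loop, and the additive way the cues are injected are simplifying assumptions made for readability.

```python
# Minimal sketch of the two-level decoupling (structure level + content level).
# All names and shapes here are illustrative assumptions, not HiGen's actual API.
import torch
import torch.nn as nn

class UnifiedDenoiser(nn.Module):
    """One denoiser reused for both spatial and temporal reasoning
    (a toy 3D convolution stands in for the real spatio-temporal network)."""
    def __init__(self, channels: int = 4):
        super().__init__()
        self.net = nn.Conv3d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x, text_emb, motion_cue=None, appearance_cue=None):
        # Content-level cues, when given, are injected as simple additive conditions;
        # the paper extracts such motion/appearance signals from the video content.
        h = x
        if motion_cue is not None:
            h = h + motion_cue.view(1, 1, -1, 1, 1)       # per-frame motion signal
        if appearance_cue is not None:
            h = h + appearance_cue.view(1, -1, 1, 1, 1)   # per-channel appearance signal
        return self.net(h + text_emb.view(1, -1, 1, 1, 1))

def denoise(model, x, text_emb, steps, **cues):
    # Placeholder iterative denoising loop (stand-in for the diffusion sampler).
    for _ in range(steps):
        x = x - 0.1 * model(x, text_emb, **cues)
    return x

denoiser = UnifiedDenoiser()
text_emb = torch.randn(4)  # stand-in text embedding

# Structure level, step 1: spatial reasoning on a single-frame latent -> spatial prior.
spatial_prior = denoise(denoiser, torch.randn(1, 4, 1, 32, 32), text_emb, steps=5)

# Structure level, step 2: temporal reasoning expands the prior over T frames,
# guided by content-level motion/appearance cues (random stand-ins here).
T = 16
video_latent = spatial_prior.repeat(1, 1, T, 1, 1) + 0.1 * torch.randn(1, 4, T, 32, 32)
video = denoise(denoiser, video_latent, text_emb, steps=5,
                motion_cue=torch.randn(T), appearance_cue=torch.randn(4))
print(video.shape)  # torch.Size([1, 4, 16, 32, 32])
```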
Related Material

[pdf] [supp] [arXiv] [bibtex]

@InProceedings{Qing_2024_CVPR,
  author    = {Qing, Zhiwu and Zhang, Shiwei and Wang, Jiayu and Wang, Xiang and Wei, Yujie and Zhang, Yingya and Gao, Changxin and Sang, Nong},
  title     = {Hierarchical Spatio-temporal Decoupling for Text-to-Video Generation},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
  pages     = {6635-6645}
}