Synthesizing Coherent Story With Auto-Regressive Latent Diffusion Models

Xichen Pan, Pengda Qin, Yuhong Li, Hui Xue, Wenhu Chen; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024, pp. 2920-2930

Abstract


Conditioned diffusion models have demonstrated state-of-the-art text-to-image synthesis capacity. Most recent works focus on synthesizing independent images, whereas for real-world applications it is common and necessary to generate a series of coherent images for storytelling. In this work, we mainly focus on the story visualization and continuation tasks and propose AR-LDM, a latent diffusion model auto-regressively conditioned on history captions and generated images. Moreover, AR-LDM can generalize to new characters through adaptation. To the best of our knowledge, this is the first work to successfully leverage diffusion models for coherent visual story synthesis. It also extends the text-conditioned method to multimodal conditioning. Quantitative results show that AR-LDM achieves SoTA FID scores on PororoSV, FlintstonesSV, and the challenging VIST dataset, which contains natural images. Large-scale human evaluations show that AR-LDM has superior performance in terms of quality, relevance, and consistency.
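
To illustrate the auto-regressive conditioning idea stated in the abstract, the sketch below shows a minimal generation loop in which every frame is produced from its caption plus all previously generated (caption, frame) pairs. This is an assumption-based illustration only, not the authors' AR-LDM implementation; the function names (generate_frame, synthesize_story) and the stand-in generator are hypothetical.

    # Minimal sketch of auto-regressive story visualization: each frame is
    # generated conditioned on the history of captions and generated frames.
    # The generator here is a stub, not AR-LDM; names are hypothetical.
    from typing import List, Tuple

    def generate_frame(caption: str, history: List[Tuple[str, str]]) -> str:
        """Placeholder for the latent diffusion step.

        A real system would encode `caption` together with the (caption, image)
        pairs in `history` into a multimodal condition and run the diffusion
        sampler in latent space. Here we return a descriptive string so the
        control flow is runnable.
        """
        return f"<frame for '{caption}' conditioned on {len(history)} past frames>"

    def synthesize_story(captions: List[str]) -> List[str]:
        """Auto-regressive loop: the multimodal history grows with every frame."""
        history: List[Tuple[str, str]] = []
        frames: List[str] = []
        for caption in captions:
            frame = generate_frame(caption, history)
            frames.append(frame)
            history.append((caption, frame))  # feed the result back as context
        return frames

    if __name__ == "__main__":
        story = [
            "Pororo wakes up in his house.",
            "Pororo meets Crong outside.",
            "They play in the snow together.",
        ]
        for f in synthesize_story(story):
            print(f)

The point of the design, as described in the abstract, is that conditioning on previously generated images rather than on captions alone is what allows the model to keep characters and scenes consistent across frames.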

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Pan_2024_WACV,
    author    = {Pan, Xichen and Qin, Pengda and Li, Yuhong and Xue, Hui and Chen, Wenhu},
    title     = {Synthesizing Coherent Story With Auto-Regressive Latent Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2024},
    pages     = {2920-2930}
}