InstanceCap: Improving Text-to-Video Generation via Instance-aware Structured Caption

Tiehan Fan, Kepan Nan, Rui Xie, Penghao Zhou, Zhenheng Yang, Chaoyou Fu, Xiang Li, Jian Yang, Ying Tai; Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), 2025, pp. 28974-28983

Abstract


Text-to-video generation has evolved rapidly in recent years, delivering remarkable results. Training typically relies on video-caption paired data, which plays a crucial role in enhancing generation performance. However, current video captions often suffer from insufficient detail, hallucinations, and imprecise motion depiction, affecting the fidelity and consistency of generated videos. In this work, we propose a novel instance-aware structured caption framework, termed InstanceCap, to achieve instance-level and fine-grained video captioning for the first time. Based on this scheme, we design an auxiliary models cluster to convert the original video into instances, enhancing instance fidelity. Video instances are further used to refine dense prompts into structured phrases, achieving concise yet precise descriptions. Furthermore, a 22K InstanceVid dataset is curated for training, and an enhancement pipeline tailored to the InstanceCap structure is proposed for inference. Experimental results demonstrate that our proposed InstanceCap significantly outperforms previous models, ensuring high fidelity between captions and videos while reducing hallucinations.
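As a rough illustration of the idea only (the abstract does not give the exact schema, so all field names below are hypothetical assumptions), an instance-aware structured caption might decompose a flat video description into global context plus per-instance phrases, which can then be reassembled into a concise prompt:

# Minimal sketch of an instance-aware structured caption.
# Field names and contents are illustrative assumptions,
# not the paper's actual InstanceCap schema.
structured_caption = {
    "global": {
        "background": "a rain-slicked city street at night",
        "camera": "slow dolly-in with shallow depth of field",
    },
    "instances": [
        {
            "class": "person",
            "appearance": "woman in a red raincoat holding an umbrella",
            "motion": "walks left to right, pausing at the curb",
        },
        {
            "class": "car",
            "appearance": "yellow taxi with headlights on",
            "motion": "drives past in the background",
        },
    ],
}

# Assemble a dense but structured prompt from the per-instance phrases.
prompt = ". ".join(
    [structured_caption["global"]["background"],
     structured_caption["global"]["camera"]]
    + [f'{inst["appearance"]}, {inst["motion"]}'
       for inst in structured_caption["instances"]]
)
print(prompt)

The point of such a structure, per the abstract, is that each instance's appearance and motion is described by a short, grounded phrase rather than a free-form paragraph, which limits hallucination and makes motion depiction explicit.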

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Fan_2025_CVPR,
    author    = {Fan, Tiehan and Nan, Kepan and Xie, Rui and Zhou, Penghao and Yang, Zhenheng and Fu, Chaoyou and Li, Xiang and Yang, Jian and Tai, Ying},
    title     = {InstanceCap: Improving Text-to-Video Generation via Instance-aware Structured Caption},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {28974-28983}
}