[pdf] [supp]
[bibtex]
@InProceedings{Zhong_2025_ICCV,
  author    = {Zhong, Yong and Yang, Zhuoyi and Teng, Jiayan and Gu, Xiaotao and Li, Chongxuan},
  title     = {Concat-ID: Towards Universal Identity-Preserving Video Synthesis},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
  month     = {October},
  year      = {2025},
  pages     = {1906-1915}
}
Concat-ID: Towards Universal Identity-Preserving Video Synthesis
Abstract
We present Concat-ID, a unified framework for identity-preserving video generation. Concat-ID employs variational autoencoders to extract image features, which are then concatenated with video latents along the sequence dimension. It relies exclusively on the model's inherent 3D self-attention mechanisms to incorporate these reference features, eliminating the need for additional parameters or modules. A novel cross-video pairing strategy and a multi-stage training regimen are introduced to balance identity consistency and facial editability while enhancing video naturalness. Extensive experiments demonstrate Concat-ID's superiority over existing methods in both single- and multi-identity generation, as well as its seamless scalability to multi-subject scenarios, including virtual try-on and background-controllable generation. Concat-ID establishes a new benchmark for identity-preserving video synthesis, providing a versatile and scalable solution for a wide range of applications.
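The core mechanism described in the abstract, concatenating reference-image latents with video latents along the sequence dimension so that existing self-attention can mix them, can be sketched as follows. This is a minimal illustration with NumPy; the token shapes, names, and the use of random arrays are assumptions for demonstration, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical shapes (illustrative only): video latents flattened to a
# token sequence of length T*h*w, and reference-image latents flattened
# to a sequence of length h*w, both with C channels per token.
T, h, w, C = 4, 8, 8, 16
video_tokens = np.random.randn(T * h * w, C)  # stand-in for VAE video latents
ref_tokens = np.random.randn(h * w, C)        # stand-in for VAE image latents

# Concat-ID's key step per the abstract: concatenate identity-image tokens
# with video tokens along the sequence dimension, so the model's inherent
# 3D self-attention can attend across both, with no extra parameters.
joint_tokens = np.concatenate([video_tokens, ref_tokens], axis=0)

print(joint_tokens.shape)  # (T*h*w + h*w, C)
```

Because the joint sequence is just a longer token list, any attention layer that already operates over the video tokens can process the reference tokens with no architectural change, which is the parameter-free property the abstract emphasizes.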
