@InProceedings{Siddiqui_2025_WACV,
  author    = {Siddiqui, Nyle and Croitoru, Florinel Alin and Nayak, Gaurav Kumar and Ionescu, Radu Tudor and Shah, Mubarak},
  title     = {DLCR: A Generative Data Expansion Framework via Diffusion for Clothes-Changing Person Re-ID},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {1608-1617}
}
DLCR: A Generative Data Expansion Framework via Diffusion for Clothes-Changing Person Re-ID
Abstract
Given the recently demonstrated strength of generative diffusion models, an open research question is whether images generated by these models can be used to learn better visual representations. While such generative data expansion may suffice for easier visual tasks, we explore its efficacy on a more difficult discriminative task: clothes-changing person re-identification (CC-ReID). CC-ReID aims to match people appearing in non-overlapping cameras, even when they change their clothes across cameras. Current CC-ReID models are not only constrained by the limited clothing diversity of existing CC-ReID datasets, but generating additional data that retains important personal features for accurate identification also remains a challenge. To address this issue, we propose DLCR, a novel data expansion framework that leverages pre-trained diffusion models and large language models (LLMs) to accurately generate diverse images of individuals in varied attire. We generate additional data for five benchmark CC-ReID datasets (PRCC, CCVID, LaST, VC-Clothes, and LTCC), increasing their clothing diversity by 10x and producing over 2.1M generated images in total. DLCR employs diffusion-based text-guided inpainting, conditioned on clothing prompts constructed using LLMs, to generate synthetic data that modifies only a subject's clothes while preserving their personally identifiable features. With this massive increase in data, we introduce two novel strategies, progressive learning and test-time prediction refinement, which respectively reduce training time and further boost CC-ReID performance. On the PRCC dataset, we obtain a large top-1 accuracy improvement of 11.3% by training CAL, a previous state-of-the-art (SOTA) method, with DLCR-generated data. We publicly release our code and generated data for each dataset here: https://github.com/CroitoruAlin/dlcr.
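The core idea of pairing each subject with many LLM-generated clothing descriptions, then feeding each description to a text-guided inpainting model whose mask covers only the clothing region, can be sketched as follows. This is a minimal illustrative sketch, not the released DLCR implementation; the function name, prompt template, and example descriptions are assumptions.

```python
def build_inpainting_prompts(clothing_descriptions):
    """Build one text-guided inpainting prompt per clothing description.

    A diffusion inpainting model would receive each prompt together with
    a mask restricted to the subject's clothing region, so identity cues
    (face, body shape) outside the mask are preserved while the clothes
    are replaced. Generating ~10 descriptions per subject gives the
    ~10x clothing-diversity expansion described in the abstract.
    """
    return [f"a person wearing {desc}" for desc in clothing_descriptions]


# Hypothetical LLM-generated clothing descriptions for one subject.
clothing = [
    "a red hoodie and blue jeans",
    "a black suit with a white shirt",
    "a yellow raincoat and dark trousers",
]
prompts = build_inpainting_prompts(clothing)
```

Each prompt in `prompts` would then condition one inpainting pass over the same source image, yielding multiple synthetic outfits per identity.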