@InProceedings{Sharma_2021_WACV,
  author    = {Sharma, Vivek and Murray, Naila and Larlus, Diane and Sarfraz, Saquib and Stiefelhagen, Rainer and Csurka, Gabriela},
  title     = {Unsupervised Meta-Domain Adaptation for Fashion Retrieval},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2021},
  pages     = {1348-1357}
}
Unsupervised Meta-Domain Adaptation for Fashion Retrieval
Abstract
Cross-domain fashion item retrieval naturally arises when unconstrained consumer images are used to query for fashion items in a collection of high-end photographs provided by retailers. To perform this task, approaches typically leverage both the consumer and shop domains of a given dataset to learn a domain-invariant representation, allowing these images of different nature to be directly compared. When consumer images are not available beforehand, such training is impossible. In this paper, we focus on this challenging yet practical scenario, and we propose instead to leverage representations learned for cross-domain retrieval on another source dataset and to adapt them to the target dataset. More precisely, we bypass the lack of consumer images and directly target the more challenging meta-domain gap that occurs between consumer images and shop images, independently of their dataset. Assuming that the datasets share some similar fashion items, we cluster their shop images and leverage the clusters to automatically generate pseudo-labels. These are used to associate consumer and shop images across datasets, which in turn allows us to learn meta-domain-invariant representations suitable for cross-domain retrieval on the target dataset.
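The pseudo-labeling step described above can be sketched roughly as follows. This is a minimal illustration on synthetic embeddings, not the authors' implementation: the clustering algorithm (plain k-means here), the embedding dimensionality, and the nearest-neighbor label-transfer rule are all assumptions made for the sake of the example.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means: returns a cluster label per row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        dists = np.linalg.norm(X[:, None] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        # Recompute centers from assigned points.
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def pseudo_labels(shop_emb, consumer_emb, k):
    """Cluster shop embeddings, then let each consumer embedding
    inherit the cluster label of its nearest shop embedding."""
    shop_labels = kmeans(shop_emb, k)
    dists = np.linalg.norm(consumer_emb[:, None] - shop_emb[None], axis=-1)
    consumer_labels = shop_labels[dists.argmin(axis=1)]
    return shop_labels, consumer_labels

# Synthetic stand-ins for image embeddings from a retrieval backbone.
rng = np.random.default_rng(1)
shop = rng.normal(size=(200, 16))
consumer = rng.normal(size=(50, 16))
s_lab, c_lab = pseudo_labels(shop, consumer, k=10)
print(s_lab.shape, c_lab.shape)
```

The resulting paired labels could then serve as supervision (e.g., for a triplet or classification loss) to align consumer and shop representations across the meta-domain gap.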