Dress Like a Star: Retrieving Fashion Products From Videos

Noa Garcia, George Vogiatzis; Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 2017, pp. 2293-2299

Abstract


This work proposes a system for retrieving clothing and fashion products from video content. Although films and television are the perfect showcase for fashion brands to promote their products, spectators are not always aware of where to buy the latest trends they see on screen. Here, a framework for bridging the gap between fashion products shown in videos and users is presented. By relating clothing items and video frames in an indexed database and performing frame retrieval with temporal aggregation and fast indexing techniques, fashion products can be found from videos in a simple and non-intrusive way. Experiments conducted on a large-scale dataset show that, by using the proposed framework, memory requirements can be reduced by 42.5X with respect to linear search, while accuracy is maintained at around 90%.
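The sketch below illustrates the general idea described in the abstract, not the authors' actual implementation: per-frame descriptors are stored in a fast index, each frame of a query clip is matched against it, and the per-frame matches are aggregated over time by voting. The descriptor dimensionality, the KD-tree index, the product mapping, and all names and parameters are illustrative assumptions.

# A minimal sketch (assumed, not the paper's method) of frame retrieval with
# temporal aggregation over an indexed frame database.
import numpy as np
from scipy.spatial import cKDTree
from collections import Counter

rng = np.random.default_rng(0)

# Database: one global descriptor per indexed video frame (assumed 128-D here),
# plus a hypothetical mapping from frame id to the product ids visible in that frame.
db_descriptors = rng.normal(size=(10_000, 128)).astype(np.float32)
frame_to_products = {i: [f"product_{i % 50}"] for i in range(len(db_descriptors))}

# Fast indexing structure standing in for whatever index the system would use.
index = cKDTree(db_descriptors)

def retrieve_products(query_frames, k=5):
    """Match each query frame against the index and aggregate votes over time."""
    votes = Counter()
    _, neighbor_ids = index.query(query_frames, k=k)  # (n_frames, k) nearest frame ids
    for row in neighbor_ids:
        for frame_id in row:
            for product in frame_to_products[int(frame_id)]:
                votes[product] += 1
    # Temporal aggregation: keep the products retrieved most consistently across the clip.
    return [product for product, _ in votes.most_common(3)]

# Example query: a short clip of 30 frames described with the same kind of descriptor.
query_clip = rng.normal(size=(30, 128)).astype(np.float32)
print(retrieve_products(query_clip))

Voting over the frames of a clip, rather than trusting a single frame, is one simple way to realize the temporal aggregation the abstract refers to; the paper's exact aggregation and indexing choices may differ.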

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Garcia_2017_ICCV,
author = {Garcia, Noa and Vogiatzis, George},
title = {Dress Like a Star: Retrieving Fashion Products From Videos},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2017}
}