[pdf]
[bibtex]
@InProceedings{Sarkar_2022_CVPR,
  author    = {Sarkar, Rohan and Bodla, Navaneeth and Vasileva, Mariya and Lin, Yen-Liang and Beniwal, Anurag and Lu, Alan and Medioni, Gerard},
  title     = {OutfitTransformer: Outfit Representations for Fashion Recommendation},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2022},
  pages     = {2263-2267}
}
OutfitTransformer: Outfit Representations for Fashion Recommendation
Abstract
Predicting outfit compatibility and retrieving complementary items are critical components of a fashion recommendation system. We present a scalable framework, OutfitTransformer, that learns the compatibility of an entire outfit and supports large-scale complementary item retrieval. We model an outfit as an unordered set of items and leverage a self-attention mechanism to learn the relationships between items. We train the framework with a proposed set-wise outfit ranking loss to generate a target item embedding given an outfit and a target item specification. The generated target item embedding is then used to retrieve compatible items that match the outfit. Experimental results demonstrate that our approach outperforms state-of-the-art methods on compatibility prediction, fill-in-the-blank, and complementary item retrieval tasks.
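The core idea of modeling an outfit as an unordered set scored with self-attention can be sketched in a few lines. This is an illustrative toy example, not the authors' implementation: the single attention layer, the dimensions, the mean pooling, and the sigmoid scoring head `w_score` are all assumptions chosen to show why the outfit score is invariant to item order.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(items, Wq, Wk, Wv):
    """items: (n_items, d) set of item embeddings.

    Each item attends to every item in the set, so the output rows
    permute together with the input rows (permutation equivariance).
    """
    Q, K, V = items @ Wq, items @ Wk, items @ Wv
    attn = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
    return attn @ V

rng = np.random.default_rng(0)
d = 8
items = rng.normal(size=(4, d))                # 4 items in one outfit
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
w_score = rng.normal(size=d)                   # hypothetical scoring head

# Mean pooling over the attended items makes the outfit embedding,
# and hence the compatibility score, order-invariant.
outfit = self_attention(items, Wq, Wk, Wv).mean(axis=0)
score = 1.0 / (1.0 + np.exp(-outfit @ w_score))  # compatibility in (0, 1)
print(float(score))
```

Because attention is computed over the whole set with no positional encoding, reversing or shuffling the item order yields the same pooled outfit embedding and the same score, which is the property that lets the model treat an outfit as a set rather than a sequence.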