Effectively Leveraging Attributes for Visual Similarity
Abstract
Measuring similarity between two images often requires performing complex reasoning along different axes (e.g., color, texture, or shape). Insights into what might be important for measuring similarity can be provided by annotated attributes. Prior work tends to view these annotations as complete, resulting in a simplistic approach of predicting attributes on single images, which are, in turn, used to measure similarity. However, it is impractical for a dataset to fully annotate every attribute that may be important. Thus, representing images based only on these incomplete annotations may miss key information. To address this issue, we propose the Pairwise Attribute-informed similarity Network (PAN), which breaks similarity learning into capturing similarity conditions and relevance scores from a joint representation of two images. This enables our model to recognize that two images share an attribute yet deem it irrelevant (e.g., due to fine-grained differences between them) and ignore it when measuring similarity. Notably, while prior methods of using attribute annotations are often unable to outperform prior art, PAN obtains a 4-9% improvement on compatibility prediction between clothing items on Polyvore Outfits, a 5% gain on few-shot classification of images using Caltech-UCSD Birds (CUB), and an over 1% boost to Recall@1 on In-Shop Clothes Retrieval.
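Below is a minimal sketch of the pairwise idea the abstract describes: rather than scoring each image independently, a joint representation of the image pair is split into per-condition similarity scores and per-condition relevance weights, so an attribute shared by both images can still be down-weighted as irrelevant for that particular pair. All names (e.g., PANSketch), dimensions, and the aggregation scheme here are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class PANSketch(nn.Module):
    """Sketch of pairwise attribute-informed similarity (assumptions, not
    the authors' code): a joint pair embedding produces per-condition
    similarity scores and per-condition relevance weights, which are
    combined into a single similarity value."""

    def __init__(self, feat_dim: int = 512, num_conditions: int = 8):
        super().__init__()
        # Joint pair representation (assumed: concatenate the two image features).
        self.joint = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim),
            nn.ReLU(),
        )
        # One similarity score per condition (e.g., color, texture, shape).
        self.condition_scores = nn.Linear(feat_dim, num_conditions)
        # Relevance weights decide which conditions matter for *this* pair,
        # so a shared attribute can still be ignored as irrelevant.
        self.relevance = nn.Sequential(
            nn.Linear(feat_dim, num_conditions),
            nn.Softmax(dim=-1),
        )

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        h = self.joint(torch.cat([x, y], dim=-1))
        scores = self.condition_scores(h)      # (batch, num_conditions)
        weights = self.relevance(h)            # (batch, num_conditions)
        return (weights * scores).sum(dim=-1)  # relevance-weighted similarity


# Usage: score a batch of image-feature pairs (features from any backbone).
if __name__ == "__main__":
    model = PANSketch()
    x, y = torch.randn(4, 512), torch.randn(4, 512)
    print(model(x, y).shape)  # torch.Size([4])
```

The softmax over conditions is one plausible way to normalize relevance; the paper's actual relevance mechanism and training objective may differ.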
Related Material
[pdf]
[arXiv]
[bibtex]
@InProceedings{Mishra_2021_CVPR,
    author    = {Mishra, Samarth and Zhang, Zhongping and Shen, Yuan and Kumar, Ranjitha and Saligrama, Venkatesh and Plummer, Bryan},
    title     = {Effectively Leveraging Attributes for Visual Similarity},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2021},
    pages     = {3904-3909}
}