@InProceedings{Phan_2022_ACCV,
  author    = {Phan, Lam and Nguyen, Hiep Thi Hong and Warrier, Harikrishna and Gupta, Yogesh},
  title     = {Patch Embedding as Local Features: Unifying Deep Local and Global Features Via Vision Transformer for Image Retrieval},
  booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
  month     = {December},
  year      = {2022},
  pages     = {2527-2544}
}
Patch Embedding as Local Features: Unifying Deep Local and Global Features Via Vision Transformer for Image Retrieval
Abstract
Image retrieval is the task of finding all images in a database that are similar to a query image. Two types of image representation have been studied to address this task: global and local image features. These features can be extracted separately or jointly in a single model. State-of-the-art methods usually learn them with Convolutional Neural Networks (CNNs) and perform retrieval with multi-scale image representations. This paper's main contribution is to unify global and local features with Vision Transformers (ViTs) and multi-atrous convolutions for high-performing retrieval. We refer to the new model as ViTGaL, standing for Vision Transformer based Global and Local features. Specifically, we add a multi-atrous convolution to the output of the transformer encoder layer of the ViT to simulate the image pyramid used in standard image retrieval algorithms. We use class attention to aggregate the token embeddings output by the multi-atrous layer into both global and local features. The entire network can be trained end-to-end, requiring only image-level labels. Extensive experiments show the proposed method outperforms state-of-the-art methods on the Revisited Oxford and Paris datasets.
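The two mechanisms the abstract names can be illustrated with a minimal NumPy sketch: patch tokens from a ViT encoder are reshaped into a 2D grid, passed through several dilated (atrous) convolutions with different rates to mimic a multi-scale pyramid, and a single learned class query attends over the resulting tokens to pool a global descriptor while the per-token outputs serve as local features. All shapes, kernel sizes, dilation rates, and the single-head class attention here are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def dilated_conv2d(x, w, dilation):
    """Depthwise 2D atrous convolution with 'same' padding.
    x: (H, W, D) token grid; w: (k, k, D) per-channel kernel."""
    H, W, D = x.shape
    k = w.shape[0]
    pad = dilation * (k // 2)
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    out = np.zeros_like(x)
    for i in range(k):
        for j in range(k):
            # each tap is offset by `dilation`, enlarging the receptive field
            out += w[i, j] * xp[i * dilation:i * dilation + H,
                                j * dilation:j * dilation + W]
    return out

def class_attention_pool(tokens, q, Wk):
    """Single-head class attention: one learned query attends over all tokens.
    tokens: (N, D); q: (D,) class query; Wk: (D, D) key projection."""
    keys = tokens @ Wk                               # (N, D)
    scores = keys @ q / np.sqrt(tokens.shape[1])     # (N,)
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()                               # softmax over tokens
    return attn @ tokens                             # (D,) global descriptor

rng = np.random.default_rng(0)
H = W = 14          # hypothetical 14x14 patch grid (224px image, 16px patches)
D = 64              # hypothetical token dimension
grid = rng.standard_normal((H, W, D))

# Multi-atrous branch: same kernel shape, increasing dilation rates,
# outputs concatenated along the channel axis.
feats = [dilated_conv2d(grid, rng.standard_normal((3, 3, D)) * 0.1, d)
         for d in (1, 2, 3)]
local = np.concatenate([f.reshape(H * W, D) for f in feats], axis=1)  # local features

global_desc = class_attention_pool(local,
                                   rng.standard_normal(3 * D),
                                   rng.standard_normal((3 * D, 3 * D)) * 0.01)
print(local.shape, global_desc.shape)
```

In this sketch the local features are the 196 multi-scale token embeddings (one per patch) and the global feature is their attention-weighted pool, matching the abstract's description of a single network producing both representations from image-level supervision.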