Improving Few-shot Learning by Spatially-aware Matching and CrossTransformer

Hongguang Zhang, Philip H. S. Torr, Piotr Koniusz; Proceedings of the Asian Conference on Computer Vision (ACCV), 2022, pp. 1298-1315

Abstract


Current few-shot learning models capture visual object relations in the so-called meta-learning setting under a fixed-resolution input. However, such models have limited generalization ability under scale and location mismatch between objects, as only a few samples from target classes are provided. The lack of a mechanism to match the scale and location between pairs of compared images therefore leads to performance degradation. The importance of image contents varies across coarse-to-fine scales depending on the object and its class label, e.g., generic objects and scenes rely on their global appearance while fine-grained objects rely more on their localized visual patterns. In this paper, we study the impact of scale and location mismatch in the few-shot learning scenario, and propose a novel Spatially-aware Matching (SM) scheme that matches features across multiple scales and locations, and learns image relations by assigning the highest weights to the best-matching pairs. SM is trained to activate the most related locations and scales between support and query data. For a comprehensive evaluation, we apply SM to various few-shot learning models and backbones. Furthermore, we leverage an auxiliary self-supervisory discriminator to train/predict the spatial- and scale-level indexes of the feature vectors we use. Finally, we develop a novel transformer-based pipeline to exploit self- and cross-attention in a spatially-aware matching process. Our proposed design is orthogonal to the choice of backbone and/or comparator.
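The core idea of the abstract — comparing support and query features extracted at multiple scales and locations, and weighting the best-matching pairs highest — can be sketched as follows. This is a minimal illustration of that weighting scheme, not the paper's exact method: the function name, the cosine-similarity score, and the softmax weighting over region pairs are our assumptions.

```python
import numpy as np

def spatial_scale_matching(support_feats, query_feats, tau=0.1):
    """Hypothetical sketch of spatially-aware matching.

    support_feats, query_feats: lists of (d,)-dim descriptors, one per
    (scale, location) region of the support/query image.
    Returns a scalar relation score in which the best-matching
    region pairs receive the highest weights.
    """
    S = np.stack(support_feats)                      # (Ns, d)
    Q = np.stack(query_feats)                        # (Nq, d)
    # l2-normalize so the dot product is cosine similarity
    S = S / np.linalg.norm(S, axis=1, keepdims=True)
    Q = Q / np.linalg.norm(Q, axis=1, keepdims=True)
    sim = S @ Q.T                                    # (Ns, Nq) pairwise similarities
    # softmax over all (scale, location) pairs: well-matched
    # pairs dominate, mismatched scales/locations are down-weighted
    w = np.exp(sim / tau)
    w = w / w.sum()
    # relation score: similarity averaged under the matching weights
    return float((w * sim).sum())
```

With a low temperature `tau`, the score is dominated by the support/query regions whose scale and location align best, which is the behavior the SM scheme is trained to produce.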

Related Material


[pdf] [arXiv] [code]
[bibtex]
@InProceedings{Zhang_2022_ACCV,
  author    = {Zhang, Hongguang and Torr, Philip H. S. and Koniusz, Piotr},
  title     = {Improving Few-shot Learning by Spatially-aware Matching and CrossTransformer},
  booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
  month     = {December},
  year      = {2022},
  pages     = {1298-1315}
}