Boosting Few-shot Action Recognition with Graph-guided Hybrid Matching

Jiazheng Xing, Mengmeng Wang, Yudi Ruan, Bofan Chen, Yaowei Guo, Boyu Mu, Guang Dai, Jingdong Wang, Yong Liu; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 1740-1750

Abstract


Class prototype construction and matching are core aspects of few-shot action recognition. Previous methods mainly focus on designing spatiotemporal relation modeling modules or complex temporal alignment algorithms. Despite promising results, they overlook the value of class prototype construction and matching, leading to unsatisfactory performance when recognizing similar categories within a task. In this paper, we propose GgHM, a new framework with Graph-guided Hybrid Matching. Concretely, we learn task-oriented features under the guidance of a graph neural network during class prototype construction, explicitly optimizing intra- and inter-class feature correlations. Next, we design a hybrid matching strategy that combines frame-level and tuple-level matching to classify videos of diverse styles. We additionally propose a learnable dense temporal modeling module that enhances the temporal representation of video features, building a more solid foundation for the matching process. GgHM shows consistent improvements over challenging baselines on several few-shot datasets, demonstrating the effectiveness of our method. The code will be publicly available at https://github.com/jiazheng-xing/GgHM.
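The hybrid matching idea described above can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the cosine-similarity metric, the max-over-prototype-frames aggregation, the use of ordered frame pairs as tuples, and the `alpha` mixing weight are all illustrative assumptions chosen to show how frame-level and tuple-level scores might be combined.

```python
import numpy as np

def cosine(a, b):
    """Pairwise cosine similarity between rows of a and rows of b."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def frame_level_score(query, proto):
    # query, proto: (T, D) frame features.
    # Average each query frame's best match against the prototype frames.
    return cosine(query, proto).max(axis=1).mean()

def tuple_level_score(query, proto):
    # Form ordered frame pairs ("tuples") by concatenating two frame
    # features, preserving temporal order, then match tuples to tuples.
    def tuples(x):
        T = x.shape[0]
        pairs = [(i, j) for i in range(T) for j in range(i + 1, T)]
        return np.stack([np.concatenate([x[i], x[j]]) for i, j in pairs])
    return cosine(tuples(query), tuples(proto)).max(axis=1).mean()

def hybrid_score(query, proto, alpha=0.5):
    # Weighted combination of frame-level and tuple-level matching;
    # alpha is a hypothetical mixing weight, not a value from the paper.
    return alpha * frame_level_score(query, proto) \
        + (1 - alpha) * tuple_level_score(query, proto)
```

Frame-level matching alone ignores temporal ordering; tuple-level matching encodes pairwise order, so combining the two covers videos whose discriminative cues lie at either granularity.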

Related Material


[bibtex]
@InProceedings{Xing_2023_ICCV,
  author    = {Xing, Jiazheng and Wang, Mengmeng and Ruan, Yudi and Chen, Bofan and Guo, Yaowei and Mu, Boyu and Dai, Guang and Wang, Jingdong and Liu, Yong},
  title     = {Boosting Few-shot Action Recognition with Graph-guided Hybrid Matching},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2023},
  pages     = {1740-1750}
}