[pdf]
[supp]
[arXiv]
[bibtex]
@InProceedings{Gowda_2024_ACCV,
    author    = {Gowda, Shreyank N and Moltisanti, Davide and Sevilla-Lara, Laura},
    title     = {Continual Learning Improves Zero-Shot Action Recognition},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2024},
    pages     = {3239-3256}
}
Continual Learning Improves Zero-Shot Action Recognition
Abstract
Zero-shot learning in action recognition requires a strong ability to generalize from pre-training and seen classes to novel unseen classes. Similarly, continual learning addresses the problem of catastrophic forgetting and aims to build models with the generalization power to learn new tasks without forgetting previous ones. While these two areas have closely aligned goals, their techniques have never been combined. In this paper we propose a novel generative model that acts as glue between two stepping stones: a feature generation network for zero-shot learning and memory replay for continual learning. This model, which we call Generative Iterative Learning (GIL), creates a memory of synthesized features of past classes together with real features of novel ones. This memory is used to retrain the classification model, ensuring balanced exposure to the old and the new. Experiments reveal that GIL alleviates catastrophic forgetting and improves generalization on unseen classes, boosting zero-shot recognition across multiple benchmarks and settings.
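The abstract describes the core mechanism only at a high level: a conditional feature generator replays synthetic features of past classes alongside real features of new ones, and the classifier is retrained on the balanced mixture. The sketch below illustrates that replay-and-retrain loop under stated assumptions; all module definitions, dimensions, and hyperparameters (ConditionalGenerator, FEAT_DIM, per_class, and so on) are illustrative choices, not the authors' implementation.

```python
# Minimal sketch of the replay idea in the abstract: synthesize features for
# previously seen classes, mix them with real features of new classes, and
# retrain the classifier on the balanced set. All names and sizes are assumptions.
import torch
import torch.nn as nn

FEAT_DIM, EMB_DIM, NOISE_DIM = 512, 300, 64  # assumed dimensions

class ConditionalGenerator(nn.Module):
    """Maps (noise, class semantic embedding) -> synthetic visual feature."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + EMB_DIM, 1024), nn.ReLU(),
            nn.Linear(1024, FEAT_DIM),
        )

    def forward(self, z, sem):
        return self.net(torch.cat([z, sem], dim=1))

def build_replay_memory(generator, seen_class_embeddings, per_class=50):
    """Synthesize a class-balanced memory of features for previously seen classes."""
    feats, labels = [], []
    with torch.no_grad():
        for cls_id, emb in seen_class_embeddings.items():
            z = torch.randn(per_class, NOISE_DIM)
            sem = emb.unsqueeze(0).expand(per_class, -1)
            feats.append(generator(z, sem))
            labels.append(torch.full((per_class,), cls_id, dtype=torch.long))
    return torch.cat(feats), torch.cat(labels)

def retrain_classifier(classifier, real_feats, real_labels, mem_feats, mem_labels,
                       epochs=5, lr=1e-3, batch_size=128):
    """Retrain on the union of real new-class features and replayed old-class features."""
    x = torch.cat([real_feats, mem_feats])
    y = torch.cat([real_labels, mem_labels])
    opt = torch.optim.Adam(classifier.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        perm = torch.randperm(len(x))
        for i in range(0, len(x), batch_size):
            idx = perm[i:i + batch_size]
            opt.zero_grad()
            loss = loss_fn(classifier(x[idx]), y[idx])
            loss.backward()
            opt.step()
```

In this sketch the classifier is any nn.Module mapping FEAT_DIM-dimensional features to class logits; the balanced mixture of replayed and real features is what counteracts forgetting while new classes are learned.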