Dynamic Adapter Tuning for Long-Tailed Class-Incremental Learning
Abstract
Long-tailed class-incremental learning (LT-CIL) aims to learn new classes continuously from a long-tailed data stream while simultaneously dealing with challenges such as the imbalanced learning of tail classes and catastrophic forgetting. To address these challenges, most existing methods employ a two-stage strategy that first trains the model from scratch and then performs balanced, knowledge-driven calibration. This strategy struggles to derive discriminative features from cold-started backbones on long-tailed data, leading to relatively diminished performance. In this paper, leveraging the powerful feature extraction capability of pre-trained foundation models, we achieve a one-stage approach that delivers superior performance. Specifically, we propose Dynamic Adapter Tuning (DAT), which employs a dynamic adapter cache mechanism to adapt a pre-trained model to tasks learned sequentially. An adapter in the cache is either dynamically selected or newly created according to task similarity, and is further compacted with the new task's adapter to mitigate cross-task and cross-class gaps in LT-CIL, significantly alleviating catastrophic forgetting and imbalanced learning, respectively. Extensive experiments show that our method consistently achieves state-of-the-art performance under the challenging LT-CIL setting.
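To make the select-or-create flow of the adapter cache concrete, the sketch below illustrates one possible reading of the abstract. It assumes each task is summarized by a prototype feature vector, that task similarity is cosine similarity between prototypes, that a similarity threshold decides whether to reuse a cached adapter, and that "compacting" two adapters is approximated by parameter averaging. None of these choices are specified by the paper; the Adapter class, DynamicAdapterCache, and all thresholds are hypothetical placeholders.

# Hypothetical sketch of a dynamic adapter cache for LT-CIL.
# Assumptions (not from the paper): prototype-based task similarity,
# a fixed reuse threshold, and parameter averaging as the compaction step.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Adapter(nn.Module):
    """Bottleneck adapter attached to a frozen pre-trained backbone."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual bottleneck transform of the frozen backbone features.
        return x + self.up(F.relu(self.down(x)))


class DynamicAdapterCache:
    """Selects a cached adapter for a new task or creates one (hypothetical)."""
    def __init__(self, dim: int, sim_threshold: float = 0.8):
        self.dim = dim
        self.sim_threshold = sim_threshold
        self.prototypes: list[torch.Tensor] = []  # one prototype per cached adapter
        self.adapters: list[Adapter] = []

    def select_or_create(self, task_prototype: torch.Tensor) -> Adapter:
        if self.adapters:
            sims = torch.stack(
                [F.cosine_similarity(task_prototype, p, dim=0) for p in self.prototypes]
            )
            best = int(torch.argmax(sims))
            if sims[best] >= self.sim_threshold:
                # The new task resembles a previous one: reuse its adapter.
                return self.adapters[best]
        # No sufficiently similar task seen before: create a fresh adapter.
        adapter = Adapter(self.dim)
        self.adapters.append(adapter)
        self.prototypes.append(task_prototype.detach())
        return adapter

    @staticmethod
    def compact(cached: Adapter, new: Adapter) -> Adapter:
        # Crude stand-in for the compaction step: average the two adapters' weights.
        merged = Adapter(cached.down.in_features, cached.down.out_features)
        with torch.no_grad():
            for m, a, b in zip(merged.parameters(), cached.parameters(), new.parameters()):
                m.copy_((a + b) / 2)
        return merged

In use, one would extract a prototype for each incoming task with the frozen backbone, call select_or_create to obtain the adapter to fine-tune, and compact it with the freshly trained task adapter before writing it back to the cache; the actual selection and compaction rules are detailed in the paper.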
Related Material
[pdf] [supp] [bibtex]

@InProceedings{Gu_2025_WACV,
    author    = {Gu, Yanan and Yang, Muli and Yang, Xu and Wei, Kun and Zhu, Hongyuan and Goenawan, Gabriel James and Deng, Cheng},
    title     = {Dynamic Adapter Tuning for Long-Tailed Class-Incremental Learning},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {8165-8174}
}