Online-LoRA: Task-Free Online Continual Learning via Low Rank Adaptation

Xiwen Wei, Guihong Li, Radu Marculescu; Proceedings of the Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 6634-6645

Abstract


Catastrophic forgetting is a significant challenge in online continual learning (OCL), especially for non-stationary data streams that do not have well-defined task boundaries. This challenge is exacerbated by the memory constraints and privacy concerns inherent in rehearsal buffers. To tackle catastrophic forgetting, in this paper we introduce Online-LoRA, a novel framework for task-free OCL. Online-LoRA enables fine-tuning of pre-trained Vision Transformer (ViT) models in real time, addressing the limitations of rehearsal buffers while leveraging the performance benefits of pre-trained models. As the main contribution, our approach features a novel online weight regularization strategy to identify and consolidate important model parameters. Moreover, Online-LoRA leverages the training dynamics of loss values to enable automatic recognition of data distribution shifts. Extensive experiments across many task-free OCL scenarios and benchmark datasets (including CIFAR-100, ImageNet-R, ImageNet-S, CUB-200, and CORe50) demonstrate that Online-LoRA can be robustly adapted to various ViT architectures while achieving better performance than SOTA methods.
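The abstract names two ingredients: low-rank (LoRA) fine-tuning of a frozen pre-trained ViT and loss-dynamics-based detection of distribution shifts. The sketch below is not the authors' implementation; it only illustrates the two generic mechanisms under stated assumptions. The rank, scaling, window size, and shift threshold are illustrative choices, and `LossShiftDetector` is a hypothetical helper, not part of the paper.

```python
# Minimal sketch (not the paper's code) of (1) a LoRA adapter over a frozen
# linear layer and (2) a loss-window heuristic that flags a possible data
# distribution shift. Rank, alpha, window, and ratio are assumed values.
import torch
import torch.nn as nn
from collections import deque


class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update W + scale * B A."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # keep pre-trained weights fixed
            p.requires_grad_(False)
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Low-rank residual path; only lora_A and lora_B receive gradients.
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)


class LossShiftDetector:
    """Hypothetical helper: flag a shift when the current loss jumps well above
    the recent running average, mimicking loss-dynamics-based detection."""

    def __init__(self, window: int = 20, ratio: float = 1.5):
        self.ratio = ratio
        self.history = deque(maxlen=window)

    def update(self, loss: float) -> bool:
        shifted = (
            len(self.history) == self.history.maxlen
            and loss > self.ratio * (sum(self.history) / len(self.history))
        )
        self.history.append(loss)
        return shifted


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(768, 768), rank=4)   # e.g. one ViT projection layer
    detector = LossShiftDetector()
    x = torch.randn(8, 768)
    loss = layer(x).pow(2).mean()                     # dummy objective for the demo
    loss.backward()                                   # gradients flow only to LoRA params
    print("shift detected:", detector.update(loss.item()))
```

In an online setting, such a detector would be called once per incoming mini-batch; how Online-LoRA actually reacts to a detected shift (e.g., consolidating or adding adapters) is described in the paper, not in this sketch.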

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Wei_2025_WACV,
    author    = {Wei, Xiwen and Li, Guihong and Marculescu, Radu},
    title     = {Online-LoRA: Task-Free Online Continual Learning via Low Rank Adaptation},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {6634-6645}
}