DELTA: Decoupling Long-Tailed Online Continual Learning

Siddeshwar Raghavan, Jiangpeng He, Fengqing Zhu; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 4054-4064

Abstract


A significant challenge in achieving ubiquitous Artificial Intelligence is the limited ability of models to rapidly learn new information in real-world scenarios where data follows long-tailed distributions, all while avoiding the forgetting of previously acquired knowledge. In this work, we study the under-explored problem of Long-Tailed Online Continual Learning (LTOCL), which aims to learn new tasks from sequentially arriving, class-imbalanced data streams. Each data sample is observed only once for training, and the task data distribution is unknown. We present DELTA, a decoupled learning approach designed to enhance learning representations and address the substantial imbalance in LTOCL. We enhance the learning process by adapting supervised contrastive learning to attract similar samples and repel dissimilar (out-of-class) samples. Furthermore, by balancing gradients during training with an equalization loss, DELTA significantly improves learning outcomes and successfully mitigates catastrophic forgetting. Through extensive evaluation, we demonstrate that DELTA improves the capacity for incremental learning, surpassing existing OCL methods. Our results suggest considerable promise for applying OCL to real-world applications.
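
The contrastive component described above corresponds to the supervised contrastive (SupCon) loss of Khosla et al. (2020), which pulls same-class embeddings together and pushes out-of-class embeddings apart. Below is a minimal PyTorch sketch of that standard loss, assuming L2-normalized features; the function name and temperature default are illustrative, and DELTA's exact adaptation may differ.

```python
import torch

def supervised_contrastive_loss(features: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.07) -> torch.Tensor:
    """Standard SupCon loss (Khosla et al., 2020); a sketch, not DELTA's exact variant.

    features: (N, D) L2-normalized embeddings.
    labels:   (N,) integer class labels.
    """
    n = features.size(0)
    device = features.device

    # Pairwise similarities, temperature-scaled.
    sim = features @ features.T / temperature
    # Subtract the per-row max for numerical stability (does not change the loss).
    sim = sim - sim.max(dim=1, keepdim=True).values.detach()

    self_mask = torch.eye(n, dtype=torch.bool, device=device)
    # Positives: samples sharing the anchor's label, excluding the anchor itself.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    # Denominator runs over all samples except the anchor itself.
    exp_sim = torch.exp(sim).masked_fill(self_mask, 0.0)
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True))

    # Average log-likelihood over each anchor's positives; skip anchors with none.
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0
    mean_log_prob_pos = (log_prob * pos_mask.float()).sum(dim=1)[valid] / pos_counts[valid]
    return -mean_log_prob_pos.mean()
```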

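For the gradient-balancing component, a minimal sketch in the spirit of the softmax equalization loss (Tan et al., 2020) is shown below: negative logits of rare classes are stochastically removed from the softmax denominator so head-class samples do not swamp the tail with negative gradients. The threshold `lam`, suppression probability `gamma`, and interface are assumptions for illustration, not necessarily DELTA's formulation.

```python
import torch
import torch.nn.functional as F

def softmax_equalization_loss(logits: torch.Tensor,
                              targets: torch.Tensor,
                              class_freq: torch.Tensor,
                              lam: float = 5e-3,
                              gamma: float = 0.95) -> torch.Tensor:
    """Softmax equalization loss in the spirit of Tan et al. (2020); a sketch only.

    Negative logits of tail classes (frequency below `lam`) are randomly dropped
    from the softmax denominator with probability `gamma`, shielding rare classes
    from the flood of negative gradients produced by head-class samples.

    logits:     (N, C) raw classifier outputs.
    targets:    (N,) integer labels.
    class_freq: (C,) per-class sample frequency in [0, 1].
    """
    n, c = logits.shape
    device = logits.device

    tail = (class_freq.to(device) < lam).float()              # (C,) tail indicator
    drop = (torch.rand(n, c, device=device) < gamma).float()  # per-sample Bernoulli
    one_hot = F.one_hot(targets, num_classes=c).float()

    # w == 0 removes a class from the denominator; the ground truth is always kept.
    w = 1.0 - drop * tail.unsqueeze(0) * (1.0 - one_hot)      # (N, C)

    # Masked, numerically stable log-softmax restricted to the kept classes.
    exp_logits = torch.exp(logits - logits.max(dim=1, keepdim=True).values) * w
    target_exp = exp_logits.gather(1, targets.unsqueeze(1)).squeeze(1)
    return -torch.log(target_exp / exp_logits.sum(dim=1)).mean()
```

In an online long-tailed stream, `class_freq` would be maintained incrementally from the class counts observed so far.
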
Related Material


[bibtex]
@InProceedings{Raghavan_2024_CVPR,
  author    = {Raghavan, Siddeshwar and He, Jiangpeng and Zhu, Fengqing},
  title     = {DELTA: Decoupling Long-Tailed Online Continual Learning},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2024},
  pages     = {4054-4064}
}