A Tiny Change, A Giant Leap: Long-Tailed Class-Incremental Learning via Geometric Prototype Alignment
Abstract
Long-Tailed Class-Incremental Learning (LT-CIL) remains a fundamental challenge due to biased gradient updates caused by highly imbalanced data distributions and the inherent stability-plasticity dilemma. These factors jointly degrade tail-class performance and exacerbate catastrophic forgetting. To tackle these issues, we propose Geometric Prototype Alignment (GPA), a model-agnostic approach that calibrates classifier learning dynamics via geometric feature-space alignment. GPA initializes classifier weights by projecting frozen class prototypes onto a unit hypersphere, thereby disentangling magnitude imbalance from angular discriminability. During incremental updates, a Dynamic Anchoring mechanism adaptively adjusts classifier weights to preserve geometric consistency, effectively balancing plasticity for new classes with stability for previously acquired knowledge. Integrated into state-of-the-art CIL frameworks such as LUCIR and DualPrompt, GPA yields substantial gains, improving average incremental accuracy by 6.11% and reducing forgetting rates by 6.38% on CIFAR100-LT. Theoretical analysis further demonstrates that GPA accelerates convergence by 2.7X and produces decision boundaries approaching Fisher-optimality. Our implementation is available at https://github.com/laixinyi023/Geometric-Prototype-Alignment.
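To make the core idea in the abstract concrete, below is a minimal sketch (not the authors' released code) of prototype-based classifier initialization on the unit hypersphere: per-class mean features serve as frozen prototypes and are L2-normalized before being used as classifier weights, so weight magnitude no longer reflects class frequency and only angular direction carries discriminative information. The function name, tensor layout, and arguments (features, labels, num_classes) are illustrative assumptions; the Dynamic Anchoring update is not shown.

# Minimal sketch, assuming `features` is an (N, D) tensor of frozen backbone
# features and `labels` an (N,) tensor of integer class ids. Hypothetical
# helper; see the official repository for the actual GPA implementation.
import torch
import torch.nn.functional as F

def init_classifier_from_prototypes(features: torch.Tensor,
                                    labels: torch.Tensor,
                                    num_classes: int) -> torch.Tensor:
    """Return (num_classes, D) classifier weights: per-class mean features
    projected onto the unit hypersphere via L2 normalization."""
    dim = features.size(1)
    weights = torch.zeros(num_classes, dim)
    for c in range(num_classes):
        class_feats = features[labels == c]
        if class_feats.numel() == 0:
            continue  # class not present in the available samples
        prototype = class_feats.mean(dim=0)          # frozen class prototype
        weights[c] = F.normalize(prototype, dim=0)   # project onto unit sphere
    return weights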
Related Material

[pdf] [supp] [bibtex]

@InProceedings{Lai_2025_ICCV,
    author    = {Lai, Xinyi and Lin, Luojun and Chen, Weijie and Yu, Yuanlong},
    title     = {A Tiny Change, A Giant Leap: Long-Tailed Class-Incremental Learning via Geometric Prototype Alignment},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2025},
    pages     = {1444-1453}
}