Local and Global Logit Adjustments for Long-Tailed Learning

Yingfan Tao, Jingna Sun, Hao Yang, Li Chen, Xu Wang, Wenming Yang, Daniel Du, Min Zheng; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 11783-11792

Abstract


Multi-expert ensemble models for long-tailed learning typically either learn diverse generalists from the whole dataset or aggregate specialists trained on different subsets. However, the former is insufficient for tail classes due to the high imbalance factor of the entire dataset, while the latter may introduce ambiguity when a specialist predicts classes it was not trained on. To address these issues, we propose a novel Local and Global Logit Adjustments (LGLA) method that learns experts on the full data covering all classes and enlarges the discrepancy among them through carefully designed logit adjustments. LGLA consists of two core components: a Class-aware Logit Adjustment (CLA) strategy and an Adaptive Angular Weighted (AAW) loss. The CLA strategy trains multiple experts, each excelling on a different subset, using Local Logit Adjustment (LLA). It also trains one expert specializing in an inversely long-tailed distribution through Global Logit Adjustment (GLA). Moreover, the AAW loss adopts adaptive hard sample mining with respect to different experts to further improve accuracy. Extensive experiments on popular long-tailed benchmarks demonstrate the superiority of LGLA over state-of-the-art methods.
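The abstract does not give the adjustment formulas, so the sketch below only illustrates the general mechanism of prior-based logit adjustment that underlies methods of this kind: shifting logits by (scaled) log class priors biases which classes a model specializes in, and inverting the priors yields a tail-specialized expert, loosely in the spirit of the paper's GLA. This is a minimal sketch, not the paper's implementation; `logit_adjusted_ce`, `class_counts`, `tau`, and `invert` are hypothetical names, and the actual LLA/GLA terms are per-expert and subset-aware.

```python
import torch
import torch.nn.functional as F

def logit_adjusted_ce(logits, targets, class_counts, tau=1.0, invert=False):
    """Cross-entropy on prior-adjusted logits (generic logit adjustment).

    Hypothetical helper for illustration only; the paper's LLA/GLA
    adjustments are more elaborate than a single global prior shift.
    """
    prior = class_counts.float() / class_counts.sum()
    if invert:
        # Invert and renormalize the prior so the expert is biased toward
        # an inversely long-tailed distribution (tail classes dominate).
        prior = 1.0 / prior
        prior = prior / prior.sum()
    # Adding scaled log-priors during training down-weights the logits of
    # frequent classes, pushing the model to fit rare classes better.
    return F.cross_entropy(logits + tau * prior.log(), targets)

# Toy usage: 5 classes with a long-tailed count profile.
counts = torch.tensor([1000, 300, 100, 30, 10])
logits = torch.randn(8, 5)
targets = torch.randint(0, 5, (8,))
loss_head = logit_adjusted_ce(logits, targets, counts)               # standard adjustment
loss_tail = logit_adjusted_ce(logits, targets, counts, invert=True)  # tail-specialized expert
```

In a multi-expert setup, each expert would receive its own adjusted loss (e.g., a different prior or subset emphasis per expert), which is what drives the diversity the abstract describes.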

Related Material


[pdf]
[bibtex]
@InProceedings{Tao_2023_ICCV,
    author    = {Tao, Yingfan and Sun, Jingna and Yang, Hao and Chen, Li and Wang, Xu and Yang, Wenming and Du, Daniel and Zheng, Min},
    title     = {Local and Global Logit Adjustments for Long-Tailed Learning},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {11783-11792}
}