Federated Class Incremental Learning: A Pseudo Feature Based Approach Without Exemplars

Min Kyoon Yoo, Yu Rang Park; Proceedings of the Asian Conference on Computer Vision (ACCV), 2024, pp. 488-498

Abstract


Federated learning typically assumes that the training data are fixed in advance, an unrealistic assumption in many real-world scenarios where new data continuously arrives and incremental training causes catastrophic forgetting. To address this challenge, we propose FCLPF (Federated Class Incremental Learning with Pseudo Features), a method that mitigates catastrophic forgetting by using pseudo features generated from class prototypes. Our approach eliminates the need to store past data and avoids computationally heavy generative models such as GANs, thereby reducing communication costs and improving efficiency. Experimental results on CIFAR-100 show that FCLPF achieves an average accuracy of 51.87% and an average forgetting of 9.62%, significantly outperforming existing baselines with an average accuracy of 47.72% and forgetting of 20.46%. On TinyImageNet, FCLPF achieves 37.56% accuracy and 3.14% forgetting, compared to the baselines' 27.69% accuracy and 24.46% forgetting, demonstrating the superior performance of FCLPF.
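
The full paper is not reproduced here, so the snippet below is only a minimal sketch of the general idea the abstract describes: instead of keeping raw exemplars, each class is summarized by a prototype (feature mean), and pseudo features for previously seen classes are sampled around those prototypes to rehearse the classifier during later incremental rounds. The function name, the Gaussian sampling, and the per-class variance statistics are illustrative assumptions, not the authors' exact procedure.

```python
import torch


def generate_pseudo_features(prototypes, variances, n_per_class=64):
    """Sample pseudo features for previously learned classes.

    prototypes: dict mapping class id -> mean feature vector (Tensor of shape [d])
    variances:  dict mapping class id -> per-dimension feature variance (Tensor of shape [d])
    Returns a batch of pseudo features and their class labels.
    """
    feats, labels = [], []
    for cls, mu in prototypes.items():
        std = variances[cls].sqrt()
        # Draw pseudo features around the stored prototype with Gaussian noise,
        # standing in for the raw samples an exemplar-based method would replay.
        noise = torch.randn(n_per_class, mu.shape[0])
        feats.append(mu.unsqueeze(0) + noise * std.unsqueeze(0))
        labels.append(torch.full((n_per_class,), cls, dtype=torch.long))
    return torch.cat(feats), torch.cat(labels)
```

Under these assumptions, only compact per-class statistics need to be kept or exchanged between clients and the server, which is far cheaper than transmitting stored exemplars or training a generative model such as a GAN.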

Related Material


[pdf]
[bibtex]
@InProceedings{Yoo_2024_ACCV,
    author    = {Yoo, Min Kyoon and Park, Yu Rang},
    title     = {Federated Class Incremental Learning: A Pseudo Feature Based Approach Without Exemplars},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2024},
    pages     = {488-498}
}