An Analysis of Best-practice Strategies for Replay and Rehearsal in Continual Learning

Alexander Krawczyk, Alexander Gepperth; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 4196-4204

Abstract


This study is set in the context of class-incremental continual learning using replay, a field that has seen notable progress in recent years, fueled by concepts like conditional latent replay or maximally interfered replay. However, there are many design choices to make when implementing replay strategies, with potentially very different outcomes across the various class-incremental scenarios. Obvious design choices in replay include the use of experience replay (ER), the choice of generator (e.g., GANs vs. VAEs) for generative replay, and whether to re-initialize generators after each task. For replay strategies in general, it remains an open question how many samples to generate for each new task and what weights to assign generated and new samples in the loss. On top of this, there are many possible CL evaluation protocols, differing in the number of tasks, the balancing of tasks, or fundamental complexity (e.g., MNIST vs. latent CIFAR/SVHN), and thus few generic conclusions about best practices for replay/rehearsal have found consensus in the literature. This study aims to establish such best practices by conducting an extensive set of representative replay experiments.
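To make the design choices concrete, a minimal sketch of experience replay is given below: a fixed-capacity buffer filled by reservoir sampling, plus a weighted combination of the new-task loss and the replay loss. All names and the choice of reservoir sampling and a scalar mixing weight `alpha` are illustrative assumptions, not the paper's specific implementation.

```python
import random


class ReplayBuffer:
    """Fixed-size buffer for experience replay (ER).

    Filled by reservoir sampling, so each sample seen so far is
    retained with equal probability (an illustrative choice; other
    selection schemes are possible).
    """

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0          # total samples offered so far
        self.rng = random.Random(seed)

    def add(self, sample):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(sample)
        else:
            # Replace a random slot with probability capacity / seen.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = sample

    def sample(self, n):
        """Draw up to n stored samples without replacement for rehearsal."""
        return self.rng.sample(self.buffer, min(n, len(self.buffer)))


def combined_loss(loss_new, loss_replay, alpha=0.5):
    """Weighted sum of new-task and replayed-sample losses.

    The weight alpha is exactly the kind of open design choice the
    study examines; 0.5 here is an arbitrary placeholder.
    """
    return alpha * loss_new + (1.0 - alpha) * loss_replay
```

In a training loop, each mini-batch of the current task would be mixed with `buffer.sample(n)` rehearsal samples, and the two loss terms combined via `combined_loss`; the study's point is that both `n` and `alpha` materially affect class-incremental outcomes.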

Related Material


@InProceedings{Krawczyk_2024_CVPR,
  author    = {Krawczyk, Alexander and Gepperth, Alexander},
  title     = {An Analysis of Best-practice Strategies for Replay and Rehearsal in Continual Learning},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2024},
  pages     = {4196-4204}
}