ILCOC: An Incremental Learning Framework Based on Contrastive One-Class Classifiers
In class-incremental learning, the number of classes to be handled grows with the number of tasks considered. The main challenge of this learning paradigm is catastrophic forgetting, i.e., the degradation of performance on old tasks after learning new tasks. Existing incremental learning algorithms generally train a multi-class classifier (e.g., a softmax classifier), which learns a decision boundary that divides the feature space into several regions. Consequently, when new data arrive, the learned boundary is updated, which may cause forgetting. Compared with a multi-class classifier, a one-class classifier focuses on characterizing the distribution of a single class. As a result, the decision boundary learned for each category is tighter and does not change while new tasks are learned. Inspired by this characteristic of one-class classifiers, we propose a novel incremental learning framework based on contrastive one-class classifiers (ILCOC) to avoid catastrophic forgetting. Specifically, we train a dedicated one-class classifier for each category and use them in parallel to achieve incremental multi-class recognition. In addition, we design a scale-boundary loss, a classifier-contrastive loss and a negative-suppression loss to strengthen the comparability of the classifiers' outputs and the discrimination ability of each one-class classifier. We evaluate ILCOC on the MNIST, CIFAR-10 and Tiny-ImageNet datasets, and the experimental results show that ILCOC achieves state-of-the-art performance.
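The core mechanism described above (one one-class classifier per category, combined in parallel for multi-class recognition, with old classifiers left untouched when new classes arrive) can be sketched as follows. This is a minimal illustration using hypothetical centroid-based scorers, not the paper's actual networks or its three losses; the class names `OneClassCenter` and `IncrementalOneClassPool` are assumptions introduced for this sketch.

```python
import numpy as np

class OneClassCenter:
    """Toy one-class classifier: models a single class by its
    feature-space centroid; score is the negative distance to it."""
    def fit(self, feats):
        self.center = feats.mean(axis=0)
        return self

    def score(self, x):
        return -np.linalg.norm(x - self.center, axis=-1)

class IncrementalOneClassPool:
    """Hypothetical pool of per-class one-class classifiers.
    Learning a new class only adds a classifier; the existing ones are
    never updated, so their decision boundaries cannot drift -- the
    property the framework exploits to avoid forgetting."""
    def __init__(self):
        self.classifiers = {}

    def learn_class(self, label, feats):
        # New classes are absorbed without touching old classifiers.
        self.classifiers[label] = OneClassCenter().fit(feats)

    def predict(self, x):
        # Parallel multi-class recognition: run every one-class scorer
        # and pick the label whose classifier responds most strongly.
        labels = list(self.classifiers)
        scores = [self.classifiers[l].score(x) for l in labels]
        return labels[int(np.argmax(scores))]

rng = np.random.default_rng(0)
pool = IncrementalOneClassPool()
pool.learn_class(0, rng.normal(0.0, 0.5, size=(50, 2)))   # task 1
pool.learn_class(1, rng.normal(10.0, 0.5, size=(50, 2)))  # task 2, later
print(pool.predict(np.array([0.1, -0.2])))
```

In a real instantiation the raw distance scores of independently trained classifiers are not directly comparable, which is exactly the gap the scale-boundary and classifier-contrastive losses are designed to close.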