Incremental Learning of Object Detectors Without Catastrophic Forgetting
Konstantin Shmelkov, Cordelia Schmid, Karteek Alahari; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 3400-3409
Abstract
Despite their success for object detection, convolutional neural networks are ill-equipped for incremental learning, i.e., adapting the original model trained on a set of classes to additionally detect objects of new classes, in the absence of the initial training data. They suffer from "catastrophic forgetting" - an abrupt degradation of performance on the original set of classes, when the training objective is adapted to the new classes. We present a method to address this issue, and learn object detectors incrementally, when neither the original training data nor annotations for the original classes in the new training set are available. The core of our proposed solution is a loss function to balance the interplay between predictions on the new classes and a new distillation loss which minimizes the discrepancy between responses for old classes from the original and the updated networks. This incremental learning can be performed multiple times, for a new set of classes in each step, with a moderate drop in performance compared to the baseline network trained on the ensemble of data. We present object detection results on the PASCAL VOC 2007 and COCO datasets, along with a detailed empirical analysis of the approach.
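To make the abstract's loss construction concrete, below is a minimal, illustrative sketch (not the authors' released code) of combining a detection loss on the new classes with a distillation term that keeps the updated network's responses for the old classes close to those of the frozen original network. The tensor shapes, the logit centering, and the weighting factor `lam` are assumptions for illustration; the exact form used in the paper may differ.

```python
import torch
import torch.nn.functional as F

def distillation_loss(old_logits, old_bbox, new_logits, new_bbox):
    """L2 discrepancy between the frozen original and updated networks on old classes.

    old_logits, new_logits: (num_proposals, num_old_classes) unnormalized scores
    old_bbox,  new_bbox:    (num_proposals, num_old_classes * 4) box regressions
    """
    # Centre the logits per proposal so only relative class responses are matched
    # (one plausible choice; the paper's exact normalization is not reproduced here).
    old_logits = old_logits - old_logits.mean(dim=1, keepdim=True)
    new_logits = new_logits - new_logits.mean(dim=1, keepdim=True)
    cls_term = F.mse_loss(new_logits, old_logits)
    box_term = F.mse_loss(new_bbox, old_bbox)
    return cls_term + box_term

def total_loss(det_loss_new_classes, old_outputs, new_outputs, lam=1.0):
    """Balance the standard detection loss on new classes against distillation on old ones."""
    return det_loss_new_classes + lam * distillation_loss(*old_outputs, *new_outputs)
```

In use, the original network is kept frozen and evaluated on the same region proposals as the updated network, so `old_outputs` serve as fixed targets while gradients flow only through `new_outputs` and the new-class detection loss.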
Related Material
[pdf]
[arXiv]
[bibtex]
@InProceedings{Shmelkov_2017_ICCV,
author = {Shmelkov, Konstantin and Schmid, Cordelia and Alahari, Karteek},
title = {Incremental Learning of Object Detectors Without Catastrophic Forgetting},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}