What Is Happening Inside a Continual Learning Model? A Representation-Based Evaluation of Representational Forgetting

Kengo Murata, Tetsuya Toyota, Kouzou Ohara; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2020, pp. 234-235

Abstract


Recently, many continual learning methods have been proposed, and their performance is usually evaluated based on their final output, such as the predicted class. However, this output-based evaluation tells us nothing about how the representations the model learned from given tasks are forgotten inside the model during the learning process, even though understanding this is important for devising an algorithm that is robust to catastrophic forgetting, an intrinsic problem in continual learning. In this work, we propose a representation-based evaluation framework and demonstrate, through intensive experiments on three benchmark datasets, that it can help us better understand representational forgetting. These experiments led us to the following findings: 1) a non-negligible amount of representational forgetting appears at the shallow layers of a deep neural network, and 2) which tasks are learned more accurately when representational forgetting occurs depends on the depth of the layer at which the forgetting is observed.
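The abstract does not specify the similarity measure used to quantify representational forgetting; as an illustrative sketch only, one common choice for comparing layer activations is linear CKA (centered kernel alignment). The function names below are hypothetical, not the authors' API. The idea: record per-layer activations on a fixed probe set before and after training on a new task, then measure how much each layer's representation drifted.

```python
import numpy as np

def linear_cka(x, y):
    """Linear CKA between two activation matrices of shape
    (n_samples, n_features); 1.0 means identical representations
    up to an orthogonal transform and scaling."""
    x = x - x.mean(axis=0)  # center features
    y = y - y.mean(axis=0)
    num = np.linalg.norm(y.T @ x, "fro") ** 2
    den = np.linalg.norm(x.T @ x, "fro") * np.linalg.norm(y.T @ y, "fro")
    return num / den

def representational_forgetting(acts_before, acts_after):
    """Per-layer similarity drop between activations recorded on the
    same probe inputs before and after learning a new task.
    Values near 0 mean the layer's representation was preserved;
    larger values indicate representational forgetting at that layer."""
    return {layer: 1.0 - linear_cka(acts_before[layer], acts_after[layer])
            for layer in acts_before}
```

Applied layer by layer, a measure like this is what would let one observe the paper's first finding, i.e., whether forgetting concentrates at shallow or deep layers.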

Related Material


[pdf]
[bibtex]
@InProceedings{Murata_2020_CVPR_Workshops,
author = {Murata, Kengo and Toyota, Tetsuya and Ohara, Kouzou},
title = {What Is Happening Inside a Continual Learning Model? A Representation-Based Evaluation of Representational Forgetting},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2020}
}