Generating Accurate Pseudo Examples for Continual Learning

Daniel L. Silver, Sazia Mahfuz; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2020, pp. 256-257

Abstract


Continual learning (CL) is concerned with the persistent and cumulative nature of learning. This requires a method of successfully consolidating new knowledge into long-term memory without the loss of prior knowledge. Prior research has addressed this CL retention problem through the efficient rehearsal of prior examples while learning the examples of a new task within a long-term Multiple Task Learning (MTL) network. The approach maintains or improves prior knowledge while allowing its representation to remain plastic for the integration of new task examples. Preferably, rehearsal is done using pseudo examples synthesized by the MTL network, eliminating the need to retain prior task training examples or to generate them with an additional model. Previous work has shown that, to properly retain knowledge, the pseudo examples must adhere to the input probability distribution of the original training examples. Two approaches are investigated for creating appropriate pseudo examples from a Restricted Boltzmann Machine (RBM) autoencoder, which can reside in the lowest layers of the long-term MTL Deep Belief Network. We show that appropriate pseudo examples can be reconstructed by passing uniform random examples to a generative RBM model and selecting only those with reconstruction error less than the mean training error. These pseudo examples are shown to adhere to the probability distribution of the input variables of the original training examples and to retain prior task knowledge during rehearsal as well as the original examples do. As part of this research, we develop and test a new metric, the Autoencoder Divergence Measure, for comparing the probability distributions of two datasets based on their mean squared reconstruction error under a generative RBM network.
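
The pseudo-example generation procedure described in the abstract can be sketched in a few lines of Python. This is an illustration only, assuming a Bernoulli RBM on [0, 1]-scaled inputs, with scikit-learn's BernoulliRBM standing in for the RBM autoencoder layer; the acceptance threshold (the mean training reconstruction error) follows the abstract, while the autoencoder_divergence function is a hypothetical instantiation, since the abstract does not give the metric's exact formula.

import numpy as np
from sklearn.neural_network import BernoulliRBM

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reconstruct(rbm, v):
    # One up-down pass: visible -> hidden probabilities -> visible probabilities.
    h = _sigmoid(v @ rbm.components_.T + rbm.intercept_hidden_)
    return _sigmoid(h @ rbm.components_ + rbm.intercept_visible_)

def reconstruction_mse(rbm, v):
    # Per-example mean squared reconstruction error.
    return np.mean((v - reconstruct(rbm, v)) ** 2, axis=1)

def generate_pseudo_examples(rbm, X_train, n_candidates=10000, seed=None):
    # Pass uniform random vectors through the RBM and keep only those whose
    # reconstruction error is below the mean training reconstruction error;
    # the accepted reconstructions serve as the pseudo examples.
    rng = np.random.default_rng(seed)
    threshold = reconstruction_mse(rbm, X_train).mean()
    candidates = rng.uniform(0.0, 1.0, size=(n_candidates, X_train.shape[1]))
    accepted = candidates[reconstruction_mse(rbm, candidates) < threshold]
    return reconstruct(rbm, accepted)

def autoencoder_divergence(rbm, X_a, X_b):
    # Hypothetical form of the Autoencoder Divergence Measure: the abstract only
    # says it compares two datasets via their reconstruction MSE under the same
    # generative RBM, so the absolute difference of means is assumed here.
    return abs(reconstruction_mse(rbm, X_a).mean() - reconstruction_mse(rbm, X_b).mean())

# Usage sketch (hypothetical data and hyperparameters):
# rbm = BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20).fit(X_train)
# pseudo_X = generate_pseudo_examples(rbm, X_train)
# adm = autoencoder_divergence(rbm, X_train, pseudo_X)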

Related Material


[bibtex]
@InProceedings{Silver_2020_CVPR_Workshops,
author = {Silver, Daniel L. and Mahfuz, Sazia},
title = {Generating Accurate Pseudo Examples for Continual Learning},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2020},
pages = {256-257}
}