Continual Adaptation of Visual Representations via Domain Randomization and Meta-Learning

Riccardo Volpi, Diane Larlus, Gregory Rogez; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 4443-4453

Abstract

Most standard learning approaches lead to fragile models that are prone to drift when sequentially trained on samples of a different nature -- the well-known problem of "catastrophic forgetting". In particular, when a model consecutively learns from different visual domains, it tends to forget past domains in favor of the most recent ones. In this context, we show that one way to learn models that are inherently more robust to forgetting is domain randomization -- for vision tasks, randomizing the current domain's distribution with heavy image manipulations. Building on this result, we devise a meta-learning strategy in which a regularizer explicitly penalizes any loss associated with transferring the model from the current domain to different "auxiliary" meta-domains, while simultaneously easing adaptation to them. These meta-domains are also generated through randomized image manipulations. We empirically demonstrate, in a variety of experiments ranging from classification to semantic segmentation, that our approach yields models that are less prone to catastrophic forgetting when transferred to new domains.
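
Since the abstract compresses the whole method into a few sentences, the following minimal PyTorch sketch may help fix ideas: it pairs domain randomization with a MAML-style regularizer of the kind described, assuming PyTorch >= 2.0 (for `torch.func`) and torchvision. The transform pool, the helpers `sample_meta_domain` and `meta_dr_step`, and all hyperparameters are illustrative assumptions rather than the authors' exact recipe.

```python
import random

import torch
import torch.nn.functional as F
from torchvision import transforms

# Hypothetical pool of heavy, label-preserving image manipulations used to
# synthesize auxiliary "meta-domains"; the paper's actual transform set is
# not reproduced here.
RANDOMIZATIONS = [
    transforms.ColorJitter(brightness=0.8, contrast=0.8, saturation=0.8),
    transforms.GaussianBlur(kernel_size=9),
    transforms.RandomInvert(p=1.0),
    transforms.RandomSolarize(threshold=0.5, p=1.0),
]


def sample_meta_domain(images):
    """Map a batch from the current domain into a randomized meta-domain."""
    return random.choice(RANDOMIZATIONS)(images)


def meta_dr_step(model, images, labels, inner_lr=1e-2, reg_weight=1.0):
    """One MAML-style step with a domain-randomized regularizer.

    The regularizer is the loss on a randomized meta-domain, evaluated
    after a simulated gradient step on the current domain, so parameter
    updates that would hurt transfer to the meta-domain are penalized.
    """
    # Loss on the current domain.
    task_loss = F.cross_entropy(model(images), labels)

    # Simulated inner update: one gradient step, keeping the graph so the
    # meta-loss can backpropagate through the adaptation.
    names, params = zip(*model.named_parameters())
    grads = torch.autograd.grad(task_loss, params, create_graph=True)
    adapted = {n: p - inner_lr * g for n, p, g in zip(names, params, grads)}

    # Evaluate the adapted parameters on an auxiliary meta-domain; the
    # manipulations preserve labels, so the same targets are reused.
    meta_images = sample_meta_domain(images)
    meta_logits = torch.func.functional_call(model, adapted, (meta_images,))
    meta_loss = F.cross_entropy(meta_logits, labels)

    return task_loss + reg_weight * meta_loss
```

A training loop would then call `loss = meta_dr_step(model, x, y)` on each batch, followed by the usual `optimizer.zero_grad()`, `loss.backward()`, and `optimizer.step()`.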

Related Material

@InProceedings{Volpi_2021_CVPR,
    author    = {Volpi, Riccardo and Larlus, Diane and Rogez, Gregory},
    title     = {Continual Adaptation of Visual Representations via Domain Randomization and Meta-Learning},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {4443-4453}
}