Unsupervised Continual Learning for Gradually Varying Domains

Abu Md Niamul Taufique, Chowdhury Sadman Jahan, Andreas Savakis; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2022, pp. 3740-3750

Abstract


In Unsupervised Domain Adaptation (UDA), a network is trained on a source domain and adapted to a target domain where no labeled data is available. Existing UDA techniques assume the entire target domain is available at once, which may not be feasible in realistic deployment settings where batches of target data are acquired over time. Continual Learning (CL) deals with such data-constrained paradigms in a supervised manner: batches of labeled samples are presented to the network sequentially, and the network continually learns from the new data without forgetting what was previously learned. Our method for unsupervised continual learning serves as a bridge between the UDA and CL paradigms. We address a gradually evolving target domain that is fragmented into multiple sequential batches, where the model continually adapts to the varying stream of data in an unsupervised manner. To tackle this challenge, we propose a source-free method based on episodic memory replay with buffer management. A contrastive loss is incorporated for better alignment between the buffer samples and the continual stream of batches. Our experiments on the rotating MNIST and CORe50 datasets confirm the benefits of our unsupervised continual learning method for gradually varying domains. The code is available at https://github.com/abutaufique/ucl-gv.git.
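
To make the replay-plus-contrastive idea from the abstract concrete, the following is a minimal, self-contained PyTorch sketch, not the authors' implementation (see the linked repository for that). The tiny encoder, reservoir-style buffer management, Gaussian-noise "augmentation", random stand-in data, and all hyperparameters are illustrative assumptions.

import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReplayBuffer:
    """Fixed-capacity episodic memory; reservoir sampling is one plausible
    buffer-management policy (an assumption, not the paper's exact scheme)."""
    def __init__(self, capacity: int):
        self.capacity, self.samples, self.seen = capacity, [], 0

    def add(self, batch: torch.Tensor) -> None:
        for x in batch:
            self.seen += 1
            if len(self.samples) < self.capacity:
                self.samples.append(x.detach())
            else:
                j = random.randrange(self.seen)  # keep a uniform subset of the stream
                if j < self.capacity:
                    self.samples[j] = x.detach()

    def sample(self, n: int) -> torch.Tensor:
        return torch.stack(random.sample(self.samples, min(n, len(self.samples))))

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """InfoNCE contrastive loss: matching rows of z1/z2 are positive pairs,
    all other rows in the batch act as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    return F.cross_entropy(logits, torch.arange(z1.size(0)))

# Toy encoder and optimizer (placeholders for a real backbone).
encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 64))
optimizer = torch.optim.SGD(encoder.parameters(), lr=1e-2)
buffer = ReplayBuffer(capacity=256)
augment = lambda x: x + 0.05 * torch.randn_like(x)  # stand-in for real augmentations

# Stream of unlabeled target batches (random stand-ins for, e.g., rotating MNIST).
for step in range(10):
    target_batch = torch.randn(32, 1, 28, 28)
    mixed = target_batch
    if buffer.samples:  # replay buffered samples alongside the incoming batch
        mixed = torch.cat([target_batch, buffer.sample(32)])
    # Two augmented views of the same images are aligned contrastively,
    # pulling buffered and streaming samples into a shared embedding space.
    loss = info_nce(encoder(augment(mixed)), encoder(augment(mixed)))
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    buffer.add(target_batch)

Note that no source data or labels appear anywhere in the loop, consistent with the source-free, unsupervised setting described in the abstract.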

Related Material


[pdf]
[bibtex]
@InProceedings{Taufique_2022_CVPR,
    author    = {Taufique, Abu Md Niamul and Jahan, Chowdhury Sadman and Savakis, Andreas},
    title     = {Unsupervised Continual Learning for Gradually Varying Domains},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2022},
    pages     = {3740-3750}
}