Simultaneous Deep Transfer Across Domains and Tasks

Eric Tzeng, Judy Hoffman, Trevor Darrell, Kate Saenko; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015, pp. 4068-4076

Abstract


Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias. Fine-tuning deep models in a new domain can require a significant amount of labeled data, which for many applications is simply not available. We propose a new CNN architecture to exploit unlabeled and sparsely labeled target domain data. Our approach simultaneously optimizes for domain invariance to facilitate domain transfer and uses a soft label distribution matching loss to transfer information between tasks. Our proposed adaptation method achieves empirical performance that exceeds previously published results on two standard benchmark visual domain adaptation tasks, evaluated in both supervised and semi-supervised adaptation settings.
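
To make the two objectives concrete, below is a minimal PyTorch sketch, not the authors' implementation: a domain confusion term that scores how far a domain classifier's output is from uniform (encouraging domain-invariant features), and a soft label matching term that aligns softened target predictions with per-category soft label distributions. The function names, the temperature value, and the loss weights lam and nu are illustrative assumptions, not taken from the paper.

import torch
import torch.nn.functional as F

def soft_label_loss(target_logits, soft_labels, temperature=2.0):
    # Cross-entropy between softened target predictions and per-category
    # soft label distributions. The temperature of 2.0 is an assumed value.
    log_probs = F.log_softmax(target_logits / temperature, dim=1)
    return -(soft_labels * log_probs).sum(dim=1).mean()

def domain_confusion_loss(domain_logits):
    # Domain invariance term: cross-entropy against a uniform distribution
    # over domains, so features carry no signal for the domain classifier.
    log_probs = F.log_softmax(domain_logits, dim=1)
    uniform = torch.full_like(log_probs, 1.0 / log_probs.size(1))
    return -(uniform * log_probs).sum(dim=1).mean()

def total_loss(class_logits, labels, domain_logits, target_logits,
               soft_labels, lam=0.1, nu=0.1):
    # Hypothetical combined objective; lam and nu are assumed weights.
    cls = F.cross_entropy(class_logits, labels)
    return (cls
            + lam * domain_confusion_loss(domain_logits)
            + nu * soft_label_loss(target_logits, soft_labels))

In the full method the soft labels come from averaging softened source-network activations per category; this sketch simply takes them as a given tensor.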

Related Material


BibTeX:
@InProceedings{Tzeng_2015_ICCV,
  author    = {Tzeng, Eric and Hoffman, Judy and Darrell, Trevor and Saenko, Kate},
  title     = {Simultaneous Deep Transfer Across Domains and Tasks},
  booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
  month     = {December},
  year      = {2015},
  pages     = {4068-4076}
}