Image to Video Domain Adaptation Using Web Supervision

Andrew Kae, Yale Song; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2020, pp. 567-575

Abstract


Training deep neural networks typically requires large amounts of labeled data, which may be scarce or expensive to obtain for a particular target domain. As an alternative, we can leverage webly-supervised data (i.e., results from a public search engine), which are relatively plentiful but may be noisy. In this work, we propose a novel two-stage approach to learning a video classifier from webly-supervised data. We argue that learning appearance features and temporal features sequentially, rather than jointly, is an easier optimization for this task. We show this by first learning an image model from web images, which is then used to initialize and train a video model. Our model applies domain adaptation to account for the potential domain shift between the source domain (webly-supervised data) and the target domain, and handles noisy search results through a novel attention component. We report results competitive with the state of the art among webly-supervised approaches on UCF-101, while simplifying the training process, and also evaluate on Kinetics for comparison.
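
The two-stage recipe the abstract describes (learn appearance on web images first, then initialize a video model that adds temporal attention and a domain-adaptation branch) can be made concrete with a short sketch. Below is a minimal PyTorch illustration under stated assumptions: the ResNet-18 backbone, the frame-level attention pooling, the 101-way output (matching UCF-101), and the DANN-style gradient-reversal domain discriminator are all illustrative stand-ins, not the paper's exact architecture or its domain-adaptation method.

import torch
import torch.nn as nn
import torchvision.models as models

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; negates gradients in the backward pass
    # (a standard DANN-style gradient-reversal trick, assumed here for illustration).
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

# Stage 1: appearance model trained on webly-supervised images.
# (The web-image training loop is elided; any image classifier would do.)
image_model = models.resnet18(num_classes=101)  # 101 action classes, as in UCF-101

# Stage 2: video model initialized from the trained image model.
class VideoModel(nn.Module):
    def __init__(self, image_model, num_classes=101, feat_dim=512):
        super().__init__()
        # Reuse the appearance backbone frame by frame (drop the final fc layer).
        self.backbone = nn.Sequential(*list(image_model.children())[:-1])
        self.attn = nn.Linear(feat_dim, 1)         # attention to down-weight noisy frames
        self.classifier = nn.Linear(feat_dim, num_classes)
        self.domain_head = nn.Linear(feat_dim, 2)  # source (web) vs. target discriminator

    def forward(self, clips):                      # clips: (B, T, C, H, W)
        b, t = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1))          # (B*T, D, 1, 1)
        feats = feats.flatten(1).view(b, t, -1)             # (B, T, D) per-frame features
        weights = torch.softmax(self.attn(feats), dim=1)    # (B, T, 1), softmax over time
        pooled = (weights * feats).sum(dim=1)               # attention-pooled clip feature
        logits = self.classifier(pooled)                    # action prediction
        domain_logits = self.domain_head(GradReverse.apply(pooled))  # adversarial DA branch
        return logits, domain_logits

model = VideoModel(image_model)
clips = torch.randn(2, 8, 3, 224, 224)             # toy batch: 2 clips of 8 frames each
logits, domain_logits = model(clips)
print(logits.shape, domain_logits.shape)           # (2, 101) and (2, 2)

The stage-1 training loop and the adversarial domain loss are left out; the sketch only fixes the order of operations the abstract argues for: learn appearance features on images first, then reuse that backbone to learn temporal pooling and domain adaptation on videos.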

Related Material


[pdf] [video]
[bibtex]
@InProceedings{Kae_2020_WACV,
    author    = {Kae, Andrew and Song, Yale},
    title     = {Image to Video Domain Adaptation Using Web Supervision},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {March},
    year      = {2020},
    pages     = {567-575}
}