Large-Scale Cross-Domain Few-Shot Learning

Jiechao Guan, Manli Zhang, Zhiwu Lu; Proceedings of the Asian Conference on Computer Vision (ACCV), 2020


Learning classifiers for novel classes with a few training examples (shots) in a new domain is a practical problem setting. However, the two problems involved in this setting, few-shot learning (FSL) and domain adaptation (DA), have only been studied separately so far. In this paper, for the first time, the problem of large-scale cross-domain few-shot learning is tackled. To overcome the dual challenges of few-shot learning and the domain gap, we propose a novel Triplet Autoencoder (TriAE) model. The model aims to learn a latent subspace where not only transfer learning from the source classes to the novel classes occurs, but also domain alignment takes place. An efficient model optimization algorithm is formulated, followed by rigorous theoretical analysis. Extensive experiments on two large-scale cross-domain datasets show that our TriAE model outperforms the state-of-the-art FSL and domain adaptation models, as well as their naive combinations. Interestingly, under the conventional large-scale FSL setting, our TriAE model also outperforms existing FSL methods by significant margins, indicating that domain gaps are universally present.
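To make the core idea concrete, the following is a minimal, illustrative sketch (not the paper's actual TriAE model or losses): source- and target-domain features are projected by a shared linear encoder into a latent subspace, where a reconstruction loss is measured and a simple mean-discrepancy alignment step (here, closed-form centering of each domain's latent codes) reduces the domain gap. All dimensions, losses, and the centering step are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy features: 32 samples per domain, 16-dim (dimensions are illustrative).
Xs = rng.normal(0.0, 1.0, size=(32, 16))   # source-domain features
Xt = rng.normal(0.5, 1.0, size=(32, 16))   # target-domain features (mean-shifted domain)

d_latent = 8
W = rng.normal(0.0, 0.1, size=(16, d_latent))   # shared linear encoder (assumed)
V = rng.normal(0.0, 0.1, size=(d_latent, 16))   # shared linear decoder (assumed)

# Project both domains into the shared latent subspace.
Zs, Zt = Xs @ W, Xt @ W

# Reconstruction loss: how well the latent codes recover the inputs.
rec_loss = np.mean((Zs @ V - Xs) ** 2) + np.mean((Zt @ V - Xt) ** 2)

# Domain-alignment loss: squared distance between the domains' latent means.
align_before = np.sum((Zs.mean(axis=0) - Zt.mean(axis=0)) ** 2)

# A crude closed-form alignment step: center each domain's latent codes so
# their means coincide at the origin (stand-in for a learned alignment term).
Zs_centered = Zs - Zs.mean(axis=0)
Zt_centered = Zt - Zt.mean(axis=0)
align_after = np.sum((Zs_centered.mean(axis=0) - Zt_centered.mean(axis=0)) ** 2)

print(f"alignment loss before: {align_before:.4f}, after: {align_after:.2e}")
```

In the actual model, such an alignment objective would be optimized jointly with the reconstruction and transfer-learning terms rather than applied as a one-shot centering step.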

Related Material

@InProceedings{Guan_2020_ACCV,
    author    = {Guan, Jiechao and Zhang, Manli and Lu, Zhiwu},
    title     = {Large-Scale Cross-Domain Few-Shot Learning},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {November},
    year      = {2020}
}