Unsupervised Representation Learning by Sorting Sequences

Hsin-Ying Lee, Jia-Bin Huang, Maneesh Singh, Ming-Hsuan Yang; The IEEE International Conference on Computer Vision (ICCV), 2017, pp. 667-676

Abstract


We present an unsupervised representation learning approach using videos without semantic labels. We leverage temporal coherence as a supervisory signal by formulating representation learning as a sequence sorting task. We take temporally shuffled frames (i.e., in non-chronological order) as inputs and train a convolutional neural network to sort the shuffled sequences. Similar to comparison-based sorting algorithms, we propose to extract features from all frame pairs and aggregate them to predict the correct order. As sorting a shuffled image sequence requires an understanding of the statistical temporal structure of images, training with such a proxy task allows us to learn rich and generalizable visual representations. We validate the effectiveness of the learned representation by using our method as pre-training for high-level recognition problems. The experimental results show that our method compares favorably against state-of-the-art methods on action recognition, image classification, and object detection tasks.
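The sorting proxy task is typically cast as classification over possible frame orderings. A minimal sketch of the label space, assuming 4-frame tuples (the frame count and the helper names here are illustrative, not taken from the paper): since a sequence played forward and the same sequence played backward are both temporally coherent, each permutation and its reverse are merged into a single class, giving n!/2 order classes.

```python
from itertools import permutations

def order_classes(n_frames=4):
    """Map each permutation of n frames to a class id, merging each
    ordering with its reverse (forward and backward playback are
    treated as the same order class)."""
    classes = {}
    for perm in permutations(range(n_frames)):
        canonical = min(perm, perm[::-1])  # pick one representative of the pair
        if canonical not in classes:
            classes[canonical] = len(classes)
    return classes

def label_of(perm, classes):
    """Class label for a shuffled frame-index tuple."""
    perm = tuple(perm)
    return classes[min(perm, perm[::-1])]

classes = order_classes(4)
print(len(classes))  # 4!/2 = 12 order classes for 4-frame tuples
```

A network trained on this task would extract features from each pair of frames, aggregate the pairwise features, and output a softmax over these 12 classes.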

Related Material


[bibtex]
@InProceedings{Lee_2017_ICCV,
author = {Lee, Hsin-Ying and Huang, Jia-Bin and Singh, Maneesh and Yang, Ming-Hsuan},
title = {Unsupervised Representation Learning by Sorting Sequences},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}