Split-Brain Autoencoders: Unsupervised Learning by Cross-Channel Prediction

Richard Zhang, Phillip Isola, Alexei A. Efros; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 1058-1067

Abstract


We propose split-brain autoencoders, a straightforward modification of the traditional autoencoder architecture, for unsupervised representation learning. The method adds a split to the network, resulting in two disjoint sub-networks. Each sub-network is trained to perform a difficult task -- predicting one subset of the data channels from another. Together, the sub-networks extract features from the entire input signal. By forcing the network to solve cross-channel prediction tasks, we induce a representation within the network which transfers well to other, unseen tasks. This method achieves state-of-the-art performance on several large-scale transfer learning benchmarks.
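
The abstract leaves the architectural details to the paper itself; as a rough illustration of the cross-channel prediction idea, the sketch below splits a Lab image into its lightness (L) and color (ab) channels and trains one small convolutional sub-network per prediction direction. The network depths, channel counts, and plain L1 regression losses are illustrative assumptions for brevity, not the paper's exact configuration (the paper also studies classification losses on quantized targets).

# Minimal sketch of a split-brain autoencoder (PyTorch), assuming a Lab-image
# input split into the L (lightness) channel and the ab (color) channels.
# Layer sizes and the L1 regression losses are illustrative assumptions.
import torch
import torch.nn as nn

def make_branch(in_ch, out_ch):
    # One sub-network: encodes its input channels and predicts the
    # complementary channels (a cross-channel prediction task).
    return nn.Sequential(
        nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(64, out_ch, 3, padding=1),
    )

class SplitBrainAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.l_to_ab = make_branch(1, 2)   # predict ab from L
        self.ab_to_l = make_branch(2, 1)   # predict L from ab

    def forward(self, lab):
        L, ab = lab[:, :1], lab[:, 1:]     # split the input channels
        return self.l_to_ab(L), self.ab_to_l(ab)

    def features(self, lab):
        # Transfer-learning features: concatenate activations from the two
        # disjoint sub-networks so together they cover the full input signal.
        L, ab = lab[:, :1], lab[:, 1:]
        f1 = self.l_to_ab[:-1](L)          # drop each branch's output layer
        f2 = self.ab_to_l[:-1](ab)
        return torch.cat([f1, f2], dim=1)

# Training sketch: each sub-network regresses the channels it does not see.
model = SplitBrainAutoencoder()
lab = torch.randn(4, 3, 32, 32)            # stand-in Lab batch
pred_ab, pred_l = model(lab)
loss = nn.functional.l1_loss(pred_ab, lab[:, 1:]) + \
       nn.functional.l1_loss(pred_l, lab[:, :1])
loss.backward()

For downstream tasks, the features method concatenates the two sub-networks' activations so that, together, they describe the entire input, matching the abstract's description of how the split network extracts features.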

Related Material


BibTeX:
@InProceedings{Zhang_2017_CVPR,
  author    = {Zhang, Richard and Isola, Phillip and Efros, Alexei A.},
  title     = {Split-Brain Autoencoders: Unsupervised Learning by Cross-Channel Prediction},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {July},
  year      = {2017}
}