Hidden Layers in Perceptual Learning

Gad Cohen, Daphna Weinshall; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 4554-4562

Abstract

Studies in visual perceptual learning investigate the way human performance improves with practice, in the context of relatively simple (and therefore more manageable) visual tasks. Building on the powerful tools currently available for training Convolutional Neural Networks (CNNs), networks whose original architecture was inspired by the visual system, we revisited some of the open computational questions in perceptual learning. We first replicated two representative sets of perceptual learning experiments by training a shallow CNN to perform the relevant tasks. These networks qualitatively showed most of the characteristic behavior observed in perceptual learning, including the hallmark phenomenon of specificity and its various manifestations in the form of transfer or partial transfer, as well as learning enabling. We next analyzed the dynamics of weight modifications in the networks, identifying patterns that appeared to be instrumental for the transfer (or generalization) of learned skills from one task to another in the simulated networks. These patterns may suggest ways in which the search domain in parameter space during network re-training can be significantly reduced, thereby accomplishing knowledge transfer.
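
The page does not include code, so the following is only an illustrative sketch of the kind of simulation the abstract describes: a shallow CNN trained on a synthetic orientation-discrimination task (a stand-in for typical perceptual learning stimuli), with the magnitude of each layer's weight change logged after every epoch. All names, the stimulus generator, and the hyperparameters here are hypothetical and not taken from the paper.

```python
# Hypothetical sketch (not the authors' code): shallow CNN on a synthetic
# orientation-discrimination task, tracking per-layer weight modifications.
import numpy as np
import torch
import torch.nn as nn

def make_grating_batch(n, ref_deg=45.0, offset_deg=5.0, size=32, noise=0.2):
    """Oriented grating patches; label 0 = ref - offset, 1 = ref + offset."""
    xs = np.linspace(-1, 1, size)
    xx, yy = np.meshgrid(xs, xs)
    images, labels = [], []
    for _ in range(n):
        label = np.random.randint(2)
        theta = np.deg2rad(ref_deg + (offset_deg if label else -offset_deg))
        grating = np.cos(8 * np.pi * (xx * np.cos(theta) + yy * np.sin(theta)))
        grating += noise * np.random.randn(size, size)
        images.append(grating[None])  # add channel dimension
        labels.append(label)
    return (torch.tensor(np.stack(images), dtype=torch.float32),
            torch.tensor(labels, dtype=torch.long))

class ShallowCNN(nn.Module):
    """One convolutional layer followed by a linear readout."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=7)   # 32x32 -> 26x26
        self.pool = nn.MaxPool2d(2)                   # 26x26 -> 13x13
        self.fc = nn.Linear(8 * 13 * 13, 2)
    def forward(self, x):
        h = self.pool(torch.relu(self.conv(x)))
        return self.fc(h.flatten(1))

model = ShallowCNN()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
snapshot = {k: v.detach().clone() for k, v in model.named_parameters()}

for epoch in range(20):
    x, y = make_grating_batch(256)
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    print(f"epoch {epoch:2d}  loss = {loss.item():.3f}")
    # How much did each layer's weights move this epoch?
    for name, p in model.named_parameters():
        delta = (p.detach() - snapshot[name]).norm().item()
        snapshot[name] = p.detach().clone()
        print(f"    {name:12s} |dW| = {delta:.4f}")
```

Comparing the per-layer |dW| traces across training stages and across tasks is one simple way to look for the kind of layer-wise modification patterns that the paper relates to specificity and transfer.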

Related Material

[pdf] [poster] [video]
[bibtex]
@InProceedings{Cohen_2017_CVPR,
author = {Cohen, Gad and Weinshall, Daphna},
title = {Hidden Layers in Perceptual Learning},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {July},
year = {2017}
}