[pdf]
[arXiv]
[bibtex]
@InProceedings{Crum_2025_WACV,
  author    = {Crum, Colton R. and Czajka, Adam},
  title     = {MENTOR: Human Perception-Guided Pretraining for Increased Generalization},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {7470-7479}
}
MENTOR: Human Perception-Guided Pretraining for Increased Generalization
Abstract
Incorporating human perception into the training of convolutional neural networks (CNNs) has boosted the generalization capabilities of such models in open-set recognition tasks. One of the active research questions is where (in the model architecture or training pipeline) and how to efficiently incorporate always-limited human perceptual data into training strategies of models. In this paper, we introduce MENTOR (huMan pErceptioN-guided preTraining fOr increased geneRalization), which addresses this question through two unique rounds of training CNNs tasked with open-set anomaly detection. First, we train an autoencoder to learn human saliency maps given an input image, without any class labels. The autoencoder is thus tasked with discovering domain-specific salient features which mimic human perception. Second, we remove the decoder part, add a classification layer on top of the encoder, and train this new model conventionally, now using class labels. We show that MENTOR successfully raises the generalization performance across three different CNN backbones in a variety of anomaly detection tasks (demonstrated for detection of unknown iris presentation attacks, synthetically-generated faces, and anomalies in chest X-ray images) compared to traditional pretraining methods (e.g., sourcing the weights from ImageNet) as well as state-of-the-art methods that incorporate human perception guidance into training. In addition, we demonstrate that MENTOR can be flexibly applied to existing human perception-guided methods, subsequently increasing their generalization with no architectural modifications.
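The two training rounds described above can be sketched in PyTorch as follows. This is a minimal illustration of the pipeline structure only: the tiny encoder/decoder, the MSE saliency loss, the two-class head, and all layer sizes are illustrative assumptions, not the authors' actual configuration.

```python
# Hedged sketch of MENTOR's two-round pipeline (assumed architecture and losses).
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Toy CNN encoder standing in for the paper's backbone."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)


class SaliencyDecoder(nn.Module):
    """Upsamples encoder features back to a single-channel saliency map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)


encoder, decoder = Encoder(), SaliencyDecoder()

# Round 1: autoencoder regresses human saliency maps; no class labels are used.
autoencoder = nn.Sequential(encoder, decoder)
images = torch.randn(4, 3, 64, 64)      # dummy image batch
saliency = torch.rand(4, 1, 64, 64)     # human saliency targets (dummy)
round1_loss = nn.functional.mse_loss(autoencoder(images), saliency)

# Round 2: discard the decoder, attach a classification head on the
# saliency-pretrained encoder, and train conventionally with class labels.
classifier = nn.Sequential(
    encoder,
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),                   # e.g., bona fide vs. anomaly
)
labels = torch.randint(0, 2, (4,))
round2_loss = nn.functional.cross_entropy(classifier(images), labels)
```

In an actual run, each round would of course loop over a dataset with an optimizer; the key point is that the encoder weights carried into round 2 were shaped by the saliency-reconstruction objective rather than by ImageNet labels.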
Related Material