Multi-Modal Score Fusion and Decision Trees for Explainable Automatic Job Candidate Screening From Video CVs

Heysem Kaya, Furkan Gurpinar, Albert Ali Salah; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2017, pp. 1-9

Abstract


We describe an end-to-end system for explainable automatic job candidate screening from video CVs. In this application, audio, face, and scene features are first computed from an input video CV using rich feature sets. These multiple modalities are fed into modality-specific regressors to predict apparent personality traits and a score indicating whether the subject will be invited to the interview. The base learners are stacked into an ensemble of decision trees to produce the outputs of the quantitative stage, and a single decision tree, combined with a rule-based algorithm, produces interview-decision explanations based on the quantitative results. The proposed system ranks first in both the quantitative and qualitative stages of the CVPR 2017 ChaLearn Job Candidate Screening Coopetition.
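
The abstract outlines a two-stage pipeline: modality-specific regressors whose scores are fused by stacking into a decision-tree ensemble (quantitative stage), followed by a single decision tree plus rules that explain the interview decision (qualitative stage). The sketch below illustrates that idea with scikit-learn; the Ridge base learners, random-forest fusion, placeholder random features, and trait names are illustrative assumptions, not the authors' actual learners, features, or data.

# Minimal sketch of the two-stage idea described in the abstract, not the
# authors' exact pipeline. All features, learners, and hyperparameters here
# are placeholder assumptions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
n = 200                                  # number of training videos (placeholder)
modalities = {
    "audio": rng.normal(size=(n, 64)),   # audio features (placeholder dimensions)
    "face":  rng.normal(size=(n, 128)),  # face features (placeholder)
    "scene": rng.normal(size=(n, 32)),   # scene features (placeholder)
}
traits = rng.uniform(size=(n, 5))        # five apparent personality traits in [0, 1]
invite = traits.mean(axis=1)             # interview-invitation score (synthetic)
targets = np.column_stack([traits, invite])

# Quantitative stage: one regressor per modality predicts all six targets,
# then the per-modality score vectors are concatenated and stacked into an
# ensemble of decision trees.
base = {name: Ridge(alpha=1.0).fit(X, targets) for name, X in modalities.items()}
scores = np.hstack([base[name].predict(X) for name, X in modalities.items()])
fusion = RandomForestRegressor(n_estimators=100, random_state=0).fit(scores, targets)
fused = fusion.predict(scores)           # fused trait and interview predictions

# Qualitative stage: a single shallow decision tree over the predicted traits
# approximates the interview score and can be read out as if-then rules.
explainer = DecisionTreeRegressor(max_depth=3, random_state=0)
explainer.fit(fused[:, :5], fused[:, 5])
print(export_text(explainer, feature_names=["O", "C", "E", "A", "N"]))

In practice, the stacking layer would be trained on held-out (e.g., cross-validated) base-learner predictions rather than in-sample scores; the printed tree corresponds to the kind of rule-based interview-decision explanation produced in the qualitative stage.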

Related Material


[bibtex]
@InProceedings{Kaya_2017_CVPR_Workshops,
author = {Kaya, Heysem and Gurpinar, Furkan and Salah, Albert Ali},
title = {Multi-Modal Score Fusion and Decision Trees for Explainable Automatic Job Candidate Screening From Video CVs},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {July},
year = {2017}
}