Image2Audio: Facilitating Semi-Supervised Audio Emotion Recognition With Facial Expression Image

Gewen He, Xiaofeng Liu, Fangfang Fan, Jane You; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2020, pp. 912-913

Abstract


There are many publicly available labeled image-based facial expression recognition datasets. How these images can help audio emotion recognition with limited labeled data, given the inherent correlations between the two modalities, is a meaningful and challenging question. In this paper, we propose a semi-supervised adversarial network that allows knowledge transfer from labeled facial expression images to the heterogeneous audio domain, thereby enhancing audio emotion recognition performance. Specifically, face image samples are translated to spectrograms in a class-wise manner. To harness the translated samples in sparsely distributed areas and construct a tighter decision boundary, we propose to precisely estimate the density in the feature space and incorporate reliable low-density samples with an annealing scheme. Moreover, unlabeled audio samples are collected along the high-density path in a graph representation. We empirically demonstrate the effectiveness of this "recognition via generation" framework on several audio emotion recognition benchmarks.
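To illustrate the density-based selection with an annealing scheme mentioned in the abstract, the sketch below shows one plausible realization: translated (image-to-spectrogram) samples are scored by their feature-space density under the labeled audio distribution, and an annealed threshold gradually admits lower-density samples over training. This is not the authors' implementation; the function name, feature arrays, and schedule are assumptions made for this example.

import numpy as np
from sklearn.neighbors import KernelDensity


def select_translated_samples(features_labeled, features_translated,
                              epoch, total_epochs,
                              bandwidth=1.0, keep_fraction_start=0.2):
    """Keep translated samples whose feature-space density, estimated from
    the labeled audio features, exceeds an annealed threshold.

    Early in training only high-density (reliable) samples are kept; the
    threshold is relaxed so that sparsely distributed samples are gradually
    incorporated.
    """
    # Estimate the density of the labeled audio feature distribution.
    kde = KernelDensity(kernel="gaussian", bandwidth=bandwidth)
    kde.fit(features_labeled)

    # Log-density of each translated sample under that distribution.
    log_density = kde.score_samples(features_translated)

    # Annealing schedule: the fraction of samples kept grows linearly
    # from keep_fraction_start to 1.0 over training.
    progress = min(epoch / max(total_epochs - 1, 1), 1.0)
    keep_fraction = keep_fraction_start + (1.0 - keep_fraction_start) * progress

    # Keep the top `keep_fraction` densest translated samples.
    cutoff = np.quantile(log_density, 1.0 - keep_fraction)
    return log_density >= cutoff


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labeled = rng.normal(0.0, 1.0, size=(200, 16))      # labeled audio features (toy data)
    translated = rng.normal(0.5, 1.5, size=(300, 16))   # translated image->spectrogram features (toy data)

    mask = select_translated_samples(labeled, translated, epoch=3, total_epochs=10)
    print(f"kept {mask.sum()} / {len(mask)} translated samples")

In a full pipeline, the kept samples would be added to the labeled audio training set for the emotion classifier, with the mask recomputed each epoch as the threshold anneals.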

Related Material


[pdf]
[bibtex]
@InProceedings{He_2020_CVPR_Workshops,
author = {He, Gewen and Liu, Xiaofeng and Fan, Fangfang and You, Jane},
title = {Image2Audio: Facilitating Semi-Supervised Audio Emotion Recognition With Facial Expression Image},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2020}
}