Learning Semantically Meaningful Embeddings Using Linear Constraints

Shuyu Lin, Bo Yang, Robert Birke, Ronald Clark; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019, pp. 53-56

Abstract


Learning an interpretable representation is an essential task in machine learning, as many fields, such as legislation and healthcare, require explainability in decision-making processes where mistakes can easily incur costly consequences. In this paper, we propose a simple embedding learning method that jointly optimises an auto-encoding reconstruction task and the estimation of the attribute labels associated with the raw data. We restrict the attribute estimation model to be linear, which constrains the learnt embedding space to stay close to the interpretable attribute space. As a result, we are able to interpret the learnt embedding as a mixture of different attributes, i.e. semantic information has been embedded in the latent representation. Furthermore, as the linear mapping is fully invertible, we are able to generate data samples from any list of specified attributes.
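The abstract describes two mechanisms: a joint objective (reconstruction plus linear attribute estimation) and generation by inverting the linear attribute map. The sketch below illustrates that structure only; the paper's actual network architectures, loss weighting, and dimensions are not given in this abstract, so the encoder and decoder here are hypothetical linear stand-ins and all names (`E`, `D`, `W`, `lam`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): data dim d_x, embedding dim d_z.
# The attribute space is taken to be the same size as the embedding
# so the linear attribute map W is square and invertible.
d_x, d_z = 8, 4

# Hypothetical linear stand-ins for the auto-encoder networks
E = rng.normal(size=(d_z, d_x)) * 0.1   # encoder
D = rng.normal(size=(d_x, d_z)) * 0.1   # decoder
# Linear attribute estimator, initialised near identity so it is invertible
W = np.eye(d_z) + 0.01 * rng.normal(size=(d_z, d_z))

def joint_loss(x, a, lam=1.0):
    """Joint objective: reconstruction error + linear attribute estimation error."""
    z = E @ x                          # embedding of the raw data
    x_hat = D @ z                      # auto-encoding reconstruction
    a_hat = W @ z                      # linear estimate of the attribute labels
    rec = np.mean((x - x_hat) ** 2)    # reconstruction term
    attr = np.mean((a - a_hat) ** 2)   # attribute supervision term
    return rec + lam * attr            # lam is an assumed trade-off weight

def generate(a):
    """Generate a sample from specified attributes via the invertible map: z = W^{-1} a."""
    z = np.linalg.solve(W, a)
    return D @ z
```

Because `W` is linear and invertible, any attribute vector `a` maps back to a unique embedding `z`, which the decoder then turns into a sample; this is the generation path the abstract refers to.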

Related Material


[bibtex]
@InProceedings{Lin_2019_CVPR_Workshops,
author = {Lin, Shuyu and Yang, Bo and Birke, Robert and Clark, Ronald},
title = {Learning Semantically Meaningful Embeddings Using Linear Constraints},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2019}
}