Deep Clustering by Gaussian Mixture Variational Autoencoders With Graph Embedding

Linxiao Yang, Ngai-Man Cheung, Jiaying Li, Jun Fang; The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 6440-6449


We propose DGG: Deep clustering via a Gaussian-mixture variational autoencoder (VAE) with Graph embedding. To facilitate clustering, we use a Gaussian mixture model (GMM) as the prior in the VAE. To handle data with complex spread, we apply graph embedding. Our idea is that graph information, which captures local data structures, is an excellent complement to the deep GMM: combining them helps the network learn powerful representations that satisfy both global model and local structural constraints. Our method therefore unifies model-based and similarity-based approaches to clustering. To combine graph embedding with the probabilistic deep GMM, we propose a novel stochastic extension of graph embedding: we treat samples as nodes on a graph and minimize the weighted distance between their posterior distributions, using the Jensen-Shannon divergence as the distance. We combine this divergence minimization with the log-likelihood maximization of the deep GMM, and derive formulations that yield a unified objective enabling simultaneous deep representation learning and clustering. Our experimental results show that the proposed DGG outperforms recent deep Gaussian mixture methods (model-based) and deep spectral clustering (similarity-based), highlighting the advantages of combining model-based and similarity-based clustering as proposed in this work. Our code is published here:
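The graph-embedding term described above can be illustrated with a small sketch. The Jensen-Shannon divergence between two Gaussians has no closed form, so the sketch below estimates it by Monte Carlo sampling for diagonal-Gaussian VAE posteriors, then sums the weighted divergences over graph edges. This is an illustrative assumption, not the paper's exact formulation; all function names (`js_divergence_mc`, `graph_embedding_penalty`) and the sampling-based estimator are hypothetical.

```python
import numpy as np

def gaussian_logpdf(x, mu, var):
    # Log density of a diagonal Gaussian N(mu, diag(var)) at points x of shape (n, d).
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var, axis=1)

def js_divergence_mc(mu1, var1, mu2, var2, n_samples=20000, rng=None):
    # Monte Carlo estimate of the Jensen-Shannon divergence between two
    # diagonal Gaussians q1 = N(mu1, var1) and q2 = N(mu2, var2):
    #   JS(q1 || q2) = 0.5 * KL(q1 || m) + 0.5 * KL(q2 || m),  m = (q1 + q2) / 2
    rng = np.random.default_rng(0) if rng is None else rng
    d = mu1.shape[0]
    x1 = mu1 + np.sqrt(var1) * rng.standard_normal((n_samples, d))
    x2 = mu2 + np.sqrt(var2) * rng.standard_normal((n_samples, d))

    def log_mix(x):
        # Log density of the equal-weight mixture m at points x.
        return np.logaddexp(gaussian_logpdf(x, mu1, var1),
                            gaussian_logpdf(x, mu2, var2)) - np.log(2.0)

    kl1 = np.mean(gaussian_logpdf(x1, mu1, var1) - log_mix(x1))
    kl2 = np.mean(gaussian_logpdf(x2, mu2, var2) - log_mix(x2))
    return 0.5 * (kl1 + kl2)

def graph_embedding_penalty(mus, vars_, edges, weights):
    # Weighted sum of JS divergences over graph edges (i, j), pulling the
    # posteriors of similar (strongly connected) samples toward each other:
    #   sum_{(i,j)} w_ij * JS(q_i || q_j)
    return sum(w * js_divergence_mc(mus[i], vars_[i], mus[j], vars_[j])
               for (i, j), w in zip(edges, weights))
```

For identical posteriors the divergence is zero, and as two posteriors move apart it saturates at log 2, which keeps the penalty bounded regardless of how dissimilar two samples are — one reason JS is a convenient choice over plain KL here.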

Related Material

@InProceedings{Yang_2019_ICCV,
  author    = {Yang, Linxiao and Cheung, Ngai-Man and Li, Jiaying and Fang, Jun},
  title     = {Deep Clustering by Gaussian Mixture Variational Autoencoders With Graph Embedding},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2019}
}