Learning Latent Structural Relations With Message Passing Prior

Shaogang Ren, Hongliang Fei, Dingcheng Li, Ping Li; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023, pp. 5334-5343

Abstract


Learning disentangled representations is an important topic in machine learning with a wide range of applications. Disentangled latent variables represent interpretable semantic information and reflect separate factors of variation in the data. Although generative models can also learn latent representations, most existing models ignore the structural information among latent variables. In this paper, we propose a novel approach to learning disentangled latent structural representations from data using decomposable variational auto-encoders. We design a novel message passing prior over the latent representations to capture the interactions among different data components. Unlike many previous methods that ignore component or object interactions, our approach simultaneously learns component representations and encodes the relationships among components. We apply our model to data segmentation and to learning latent representations across different data components. Experiments on several benchmarks demonstrate the utility of the proposed method.
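The abstract's core idea, a prior over per-component latent variables whose mean is produced by passing messages along relations among components, can be illustrated with a minimal numpy sketch. This is not the paper's actual formulation; the function names, the row-normalized relation matrix `adj`, the shared transform `weight`, and the isotropic Gaussian prior are all simplifying assumptions made here for illustration.

```python
import numpy as np

def message_passing_prior_mean(z, adj, weight, steps=1):
    # Illustrative message passing over component latents (an assumption,
    # not the paper's exact update rule).
    # z:      (K, D) latent vector per data component
    # adj:    (K, K) relation/adjacency matrix among components
    # weight: (D, D) shared message transformation
    h = z
    for _ in range(steps):
        # aggregate messages from related components, then transform
        h = np.tanh(adj @ h @ weight)
    return h

def gaussian_log_prior(z, mean):
    # log N(z | mean, I), summed over all K * D latent dimensions
    return -0.5 * np.sum((z - mean) ** 2 + np.log(2.0 * np.pi))

# Tiny demo: 3 components, 4 latent dimensions each.
rng = np.random.default_rng(0)
K, D = 3, 4
z = rng.normal(size=(K, D))
adj = np.ones((K, K)) / K       # fully connected, uniform relations
weight = 0.1 * np.eye(D)        # weak identity message transform
prior_mean = message_passing_prior_mean(z, adj, weight)
log_p = gaussian_log_prior(z, prior_mean)
```

In a VAE-style training loop, `log_p` would replace the standard-normal prior term in the ELBO, so that each component's latent code is scored against a mean shaped by the other components' codes.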

Related Material


@InProceedings{Ren_2023_WACV,
  author    = {Ren, Shaogang and Fei, Hongliang and Li, Dingcheng and Li, Ping},
  title     = {Learning Latent Structural Relations With Message Passing Prior},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2023},
  pages     = {5334-5343}
}