Variational Autoencoders for Deforming 3D Mesh Models

Qingyang Tan, Lin Gao, Yu-Kun Lai, Shihong Xia; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 5841-5850

Abstract

3D geometric content is becoming increasingly popular. In this paper, we study the problem of analyzing deforming 3D meshes using deep neural networks. Deforming 3D meshes are a flexible representation for 3D animation sequences as well as for collections of objects of the same category, allowing diverse shapes with large-scale non-linear deformations. We propose a novel framework, which we call mesh variational autoencoders (mesh VAE), to explore the probabilistic latent space of 3D surfaces. The framework is easy to train and requires very few training examples. We also propose an extended model which allows flexibly adjusting the significance of different latent variables by altering the prior distribution. Extensive experiments demonstrate that our general framework is able to learn a reasonable representation for a collection of deformable shapes and produces competitive results for a variety of applications, including shape generation, shape interpolation, shape space embedding and shape exploration, outperforming state-of-the-art methods.
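
To make the setup concrete, below is a minimal sketch (not the authors' released code) of a variational autoencoder over mesh deformation data in PyTorch. It assumes each mesh in the collection has been pre-processed into a fixed-length deformation feature vector of size feat_dim, uses simple fully connected encoder and decoder networks, and exposes a prior_std parameter only to illustrate how altering the Gaussian prior can change the significance of latent variables; the network sizes, activations and loss weighting are illustrative assumptions, not values from the paper.

import math

import torch
import torch.nn as nn
import torch.nn.functional as F

class MeshVAE(nn.Module):
    """Illustrative VAE over per-mesh deformation feature vectors."""
    def __init__(self, feat_dim, hidden_dim=512, latent_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.Tanh())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)      # posterior mean
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)  # posterior log-variance
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden_dim), nn.Tanh(),
                                 nn.Linear(hidden_dim, feat_dim))

    def reparameterize(self, mu, logvar):
        # Sample z = mu + sigma * eps with eps ~ N(0, I).
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterize(mu, logvar)
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar, prior_std=1.0, kl_weight=1.0):
    # Reconstruction term on the deformation features.
    rec = F.mse_loss(recon, x, reduction='mean')
    # KL divergence between N(mu, sigma^2) and a zero-mean Gaussian prior with
    # standard deviation prior_std; choosing different prior standard deviations
    # per latent dimension is one way to adjust how much each latent variable matters.
    var = logvar.exp()
    kl = 0.5 * torch.mean(var / prior_std ** 2 + mu ** 2 / prior_std ** 2
                          - 1.0 - logvar + 2.0 * math.log(prior_std))
    return rec + kl_weight * kl

Under these assumptions, a training step would encode a batch of deformation features, decode, and minimize vae_loss; shape generation then amounts to decoding latent vectors sampled from the prior, and shape interpolation to decoding points along a path between two encoded shapes.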

Related Material

@InProceedings{Tan_2018_CVPR,
author = {Tan, Qingyang and Gao, Lin and Lai, Yu-Kun and Xia, Shihong},
title = {Variational Autoencoders for Deforming 3D Mesh Models},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}