Convolutional Sequence Generation for Skeleton-Based Action Synthesis

Sijie Yan, Zhizhong Li, Yuanjun Xiong, Huahan Yan, Dahua Lin; The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 4394-4402

Abstract

In this work, we aim to generate long actions represented as sequences of skeletons. The generated sequences must demonstrate continuous, meaningful human actions while maintaining coherence among body parts. Instead of generating skeletons sequentially with an autoregressive model, we propose a framework that generates the entire sequence at once by transforming a sequence of latent vectors sampled from a Gaussian process (GP). This framework, named Convolutional Sequence Generation Network (CSGN), jointly models structures in the temporal and spatial dimensions. It captures the temporal structure at multiple scales through the GP prior and temporal convolutions, and establishes the spatial connection between the latent vectors and the skeleton graphs via a novel graph refining scheme. Notably, CSGN allows bidirectional transforms between the latent and observed spaces, enabling semantic manipulation of action sequences in various forms. We conducted empirical studies on multiple datasets, including a set of high-quality dancing sequences collected by us. The results show that our framework can produce long action sequences that are coherent across time steps and among body parts.
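The core generative recipe described above — draw a temporally correlated latent sequence from a GP prior, then expand it to the target length with temporal (de)convolutions — can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the RBF length scale, latent dimensionality, and fixed smoothing kernel are all assumed placeholder choices, and the real CSGN uses learned graph-convolutional generator layers rather than a hand-set kernel.

```python
import numpy as np

def rbf_kernel(t, length_scale=5.0):
    """RBF covariance over time steps: nearby frames get correlated latents."""
    d2 = (t[:, None] - t[None, :]) ** 2
    return np.exp(-d2 / (2.0 * length_scale ** 2))

def sample_gp_latents(T=16, dim=8, length_scale=5.0, seed=0):
    """Sample a (T, dim) latent sequence from a zero-mean GP prior,
    one independent GP per latent channel."""
    rng = np.random.default_rng(seed)
    t = np.arange(T, dtype=float)
    K = rbf_kernel(t, length_scale) + 1e-6 * np.eye(T)  # jitter for stability
    L = np.linalg.cholesky(K)
    return L @ rng.standard_normal((T, dim))

def temporal_upsample(z, kernel, stride=2):
    """Transposed 1-D convolution along the time axis: a stand-in for the
    generator's temporal deconvolution layers that lengthen the sequence."""
    T, dim = z.shape
    k = len(kernel)
    out = np.zeros((T * stride + k - 1, dim))
    for i in range(T):
        out[i * stride:i * stride + k] += kernel[:, None] * z[i]
    return out

z = sample_gp_latents()                                   # (16, 8) correlated latents
x = temporal_upsample(z, kernel=np.array([0.25, 0.5, 0.25]))  # (34, 8) longer sequence
```

Because every frame's latent comes from one joint GP draw rather than an autoregressive chain, the whole sequence can be generated in parallel, and smoothness across time is guaranteed by the kernel rather than learned step-by-step.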

Related Material

[bibtex]
@InProceedings{Yan_2019_ICCV,
author = {Yan, Sijie and Li, Zhizhong and Xiong, Yuanjun and Yan, Huahan and Lin, Dahua},
title = {Convolutional Sequence Generation for Skeleton-Based Action Synthesis},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}