Dense Human Body Correspondences Using Convolutional Networks

Lingyu Wei, Qixing Huang, Duygu Ceylan, Etienne Vouga, Hao Li; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 1544-1553


We propose a deep learning approach for finding dense correspondences between 3D scans of people. Our method requires only partial geometric information in the form of two depth maps or partial reconstructed surfaces, works for humans in arbitrary poses and wearing any clothing, does not require the two people to be scanned from similar viewpoints, and runs in real time. We use a deep convolutional neural network to train a feature descriptor on depth map pixels, but crucially, rather than training the network to solve the shape correspondence problem directly, we train it to solve a body region classification problem, modified to increase the smoothness of the learned descriptors near region boundaries. This approach ensures that nearby points on the human body are nearby in feature space, and vice versa, rendering the feature descriptor suitable for computing dense correspondences between the scans. We validate our method on real and synthetic data for both clothed and unclothed humans, and show that our correspondences are more robust than is possible with state-of-the-art unsupervised methods, and more accurate than those found using methods that require full watertight 3D geometry.
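Once each depth-map pixel carries a learned descriptor, dense correspondence reduces to nearest-neighbor matching in feature space: a pixel in one scan is matched to the pixel in the other scan whose descriptor is closest. The sketch below illustrates that matching step only; the function name, descriptor shapes, and toy data are illustrative assumptions, not the paper's actual network or implementation.

```python
import numpy as np

def dense_correspondences(desc_a, desc_b):
    """Illustrative nearest-neighbor matching in descriptor space.

    desc_a: (Na, D) array, one D-dim learned descriptor per pixel of scan A.
    desc_b: (Nb, D) array, descriptors for scan B.
    Returns, for each row of desc_a, the index of the closest row of desc_b
    under squared Euclidean distance.
    """
    # Pairwise squared distances via the expansion |a - b|^2 = |a|^2 - 2ab + |b|^2.
    d2 = (np.sum(desc_a ** 2, axis=1, keepdims=True)
          - 2.0 * desc_a @ desc_b.T
          + np.sum(desc_b ** 2, axis=1))
    return np.argmin(d2, axis=1)
```

In practice such raw nearest-neighbor matches would be refined (e.g. by enforcing spatial smoothness), but the example shows why the descriptor's key property, nearby body points mapping to nearby feature vectors, makes matching well-posed.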

BibTeX

@InProceedings{Wei_2016_CVPR,
author = {Wei, Lingyu and Huang, Qixing and Ceylan, Duygu and Vouga, Etienne and Li, Hao},
title = {Dense Human Body Correspondences Using Convolutional Networks},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2016}
}