Self-Supervised Learning of Pose Embeddings From Spatiotemporal Relations in Videos

Ömer Sümer, Tobias Dencker, Björn Ommer; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 4298-4307

Abstract


Human pose analysis is presently dominated by deep convolutional networks trained with extensive manual annotations of joint locations and beyond. To avoid the need for expensive labeling, we exploit spatiotemporal relations in training videos for self-supervised learning of pose embeddings. The key idea is to combine temporal ordering and spatial placement estimation as auxiliary tasks for learning pose similarities in a Siamese convolutional network. Since the self-supervised sampling of both tasks from natural videos can result in ambiguous and incorrect training labels, our method employs a curriculum learning strategy that starts training with the most reliable data samples and gradually increases the difficulty. To further refine the training process, we mine repetitive poses in individual videos, which provide reliable labels while removing inconsistencies. Our pose embeddings capture visual characteristics of human pose that can boost existing supervised representations in human pose estimation and retrieval. We report quantitative and qualitative results on these tasks on the Olympic Sports, Leeds Sports Pose, and MPII Human Pose datasets.
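The abstract's self-supervised labeling and curriculum can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it is a hypothetical example of how one might sample frame pairs for a temporal ordering task and schedule their difficulty: pairs far apart in time yield reliable order labels (easy), while nearby frames are more ambiguous (hard), so the minimum gap shrinks as training progresses. All function names and parameters here are illustrative assumptions.

```python
import random

def temporal_ordering_pairs(num_frames, min_gap, n_pairs, seed=0):
    """Sample frame-index pairs from one video for a temporal ordering task.

    Each pair (i, j) gets label 1 if frame i precedes frame j, else 0.
    Pairs closer than `min_gap` frames are rejected, since their ordering
    label would be ambiguous for slow or repetitive motion.
    (Hypothetical sketch, not the paper's sampling procedure.)
    """
    rng = random.Random(seed)
    pairs = []
    while len(pairs) < n_pairs:
        i = rng.randrange(num_frames)
        j = rng.randrange(num_frames)
        if abs(i - j) < min_gap:
            continue  # too close in time: skip unreliable sample
        pairs.append(((i, j), 1 if i < j else 0))
    return pairs

def curriculum_min_gap(epoch, max_gap=30, min_gap=2, decay=5):
    """Curriculum over difficulty: start with large temporal gaps
    (reliable, easy labels) and shrink the required gap each epoch,
    gradually admitting harder, more ambiguous pairs."""
    return max(min_gap, max_gap - decay * epoch)
```

In a full pipeline, the sampled frame pairs would be fed to the two branches of a Siamese network, with the binary order label supervising an auxiliary classification head; the spatial placement task could be sampled analogously from patch positions within a frame.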

Related Material


[bibtex]
@InProceedings{Sumer_2017_ICCV,
author = {Sumer, Omer and Dencker, Tobias and Ommer, Bjorn},
title = {Self-Supervised Learning of Pose Embeddings From Spatiotemporal Relations in Videos},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}