PanopTOP: A Framework for Generating Viewpoint-Invariant Human Pose Estimation Datasets

Nicola Garau, Giulia Martinelli, Piotr Bródka, Niccolò Bisagno, Nicola Conci; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2021, pp. 234-242

Abstract


Human pose estimation (HPE) from RGB and depth images has recently seen a push towards viewpoint-invariant and scale-invariant pose retrieval methods. Current methods, however, fail to generalise to unconventional viewpoints due to the lack of viewpoint-invariant data at training time: existing datasets do not provide multi-viewpoint observations and mostly focus on frontal views. In this work, we introduce PanopTOP, a fully automatic framework for the generation of semi-synthetic RGB and depth samples with 2D and 3D ground truth of pedestrian poses from multiple arbitrary viewpoints. Starting from the Panoptic Dataset, we use the PanopTOP framework to generate the PanopTOP31K dataset, consisting of 31K images of 23 different subjects recorded from diverse and challenging viewpoints, including the top view. Finally, we provide baseline results and cross-validation tests on our dataset, demonstrating that it is possible to generalise from the semi-synthetic to the real-world domain. The dataset and the code will be made publicly available upon acceptance.
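At the core of generating 2D ground truth from arbitrary viewpoints is standard pinhole projection of the 3D joint annotations into a virtual camera. The sketch below is illustrative only: the function names, the look-at construction, and the toy intrinsics are our assumptions, not the paper's actual pipeline.

```python
import numpy as np

def look_at(cam_pos, target, up):
    """Build a world-to-camera rotation R and translation t for a camera
    at cam_pos looking at target (right-handed, +z along the view ray)."""
    z = target - cam_pos
    z = z / np.linalg.norm(z)
    x = np.cross(up, z)
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    R = np.stack([x, y, z])   # rows are the camera axes in world coords
    t = -R @ cam_pos          # so that X_cam = R @ X_world + t
    return R, t

def project_joints(joints_w, K, R, t):
    """Project Nx3 world-space joints to Nx2 pixel coordinates plus
    per-joint depth, using a pinhole model with intrinsics K."""
    X_cam = joints_w @ R.T + t        # world -> camera frame
    depth = X_cam[:, 2]
    uv = X_cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]       # perspective divide
    return uv, depth

# Toy example: one joint at the world origin seen from a top-view camera
# placed 3 m overhead (hypothetical intrinsics).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = look_at(cam_pos=np.array([0.0, 3.0, 0.0]),
               target=np.array([0.0, 0.0, 0.0]),
               up=np.array([0.0, 0.0, 1.0]))
uv, depth = project_joints(np.array([[0.0, 0.0, 0.0]]), K, R, t)
# The joint lies on the optical axis, so it projects to the principal
# point (320, 240) at depth 3.
```

Repeating the projection for many sampled camera poses around the subject is what yields multi-viewpoint 2D annotations from a single 3D skeleton.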

Related Material


[bibtex]
@InProceedings{Garau_2021_ICCV,
    author    = {Garau, Nicola and Martinelli, Giulia and Br\'odka, Piotr and Bisagno, Niccol\`o and Conci, Nicola},
    title     = {PanopTOP: A Framework for Generating Viewpoint-Invariant Human Pose Estimation Datasets},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2021},
    pages     = {234-242}
}