DECA: Deep Viewpoint-Equivariant Human Pose Estimation Using Capsule Autoencoders

Nicola Garau, Niccolò Bisagno, Piotr Bródka, Nicola Conci; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 11677-11686

Abstract


Human Pose Estimation (HPE) aims at retrieving the 3D position of human joints from images or videos. We show that current 3D HPE methods suffer from a lack of viewpoint equivariance, namely they tend to fail or perform poorly when dealing with viewpoints unseen at training time. Deep learning methods often rely on either scale-invariant, translation-invariant, or rotation-invariant operations, such as max-pooling. However, the adoption of such procedures does not necessarily improve viewpoint generalization; rather, it results in more data-dependent methods. To tackle this issue, we propose a novel capsule autoencoder network with fast Variational Bayes capsule routing, named DECA. By modeling each joint as a capsule entity, combined with the routing algorithm, our approach can preserve the joints' hierarchical and geometrical structure in the feature space, independently from the viewpoint. By achieving viewpoint equivariance, we drastically reduce the network's data dependency at training time, resulting in an improved ability to generalize to unseen viewpoints. In the experimental validation, we outperform other methods on depth images from both seen and unseen viewpoints, both top-view and front-view. In the RGB domain, the same network gives state-of-the-art results on the challenging viewpoint-transfer task, also establishing a new framework for top-view HPE.
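To illustrate the abstract's core idea of treating each joint as a capsule entity (a pose vector plus an activation) that is later decoded into 3D joint positions, the sketch below is a minimal, hypothetical PyTorch encoder/decoder. All names (JointCapsuleEncoder, CapsulePoseDecoder, num_joints, capsule_dim) and shapes are assumptions for illustration only; it is not the authors' DECA implementation, and the paper's fast Variational Bayes capsule routing is replaced here by a simple activation-based weighting.

```python
# Hypothetical sketch of per-joint pose capsules for 3D HPE.
# Not the authors' DECA implementation; names and shapes are assumptions.
import torch
import torch.nn as nn

class JointCapsuleEncoder(nn.Module):
    """Encodes a depth image into one pose capsule per joint."""
    def __init__(self, num_joints=15, capsule_dim=16):
        super().__init__()
        self.num_joints = num_joints
        self.capsule_dim = capsule_dim
        self.backbone = nn.Sequential(              # small conv feature extractor
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Each joint capsule gets a pose vector and an activation (presence) logit.
        self.to_capsules = nn.Linear(64, num_joints * (capsule_dim + 1))

    def forward(self, depth):                       # depth: (B, 1, H, W)
        feats = self.backbone(depth).flatten(1)     # (B, 64)
        caps = self.to_capsules(feats)              # (B, J * (D + 1))
        caps = caps.view(-1, self.num_joints, self.capsule_dim + 1)
        pose, act = caps[..., :-1], torch.sigmoid(caps[..., -1])
        return pose, act                            # (B, J, D), (B, J)

class CapsulePoseDecoder(nn.Module):
    """Decodes 3D joint positions from the per-joint capsule poses."""
    def __init__(self, capsule_dim=16):
        super().__init__()
        self.regress = nn.Linear(capsule_dim, 3)    # shared per-capsule regressor

    def forward(self, pose, act):
        # Activations down-weight capsules of unlikely / occluded joints.
        joints_3d = self.regress(pose) * act.unsqueeze(-1)
        return joints_3d                            # (B, J, 3)

if __name__ == "__main__":
    enc, dec = JointCapsuleEncoder(), CapsulePoseDecoder()
    depth = torch.randn(2, 1, 128, 128)             # toy batch of depth maps
    pose, act = enc(depth)
    print(dec(pose, act).shape)                     # torch.Size([2, 15, 3])
```

Because the joint structure lives in the capsule pose vectors rather than in pooled, invariant features, a rotation of the camera viewpoint can in principle be reflected as a transformation of those vectors, which is the equivariance property the abstract refers to.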

Related Material


[bibtex]
@InProceedings{Garau_2021_ICCV,
    author    = {Garau, Nicola and Bisagno, Niccol\`o and Br\'odka, Piotr and Conci, Nicola},
    title     = {DECA: Deep Viewpoint-Equivariant Human Pose Estimation Using Capsule Autoencoders},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {11677-11686}
}