Recognizing Actions in Videos From Unseen Viewpoints

AJ Piergiovanni, Michael S. Ryoo; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 4124-4132

Abstract


Standard methods for video recognition use large CNNs designed to capture spatio-temporal data. However, training these models requires a large amount of labeled training data covering a wide variety of actions, scenes, settings and camera viewpoints. In this paper, we show that current convolutional neural network models are unable to recognize actions from camera viewpoints not present in their training data (i.e., unseen-view action recognition). To address this, we develop approaches based on 3D pose and introduce a new geometric convolutional layer that can learn viewpoint-invariant representations. Further, we introduce a new, challenging dataset for unseen-view recognition and show these approaches' ability to learn viewpoint-invariant representations.
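The abstract does not detail the geometric layer itself, but the core idea of viewpoint invariance from 3D pose can be illustrated with a simple, hypothetical example (not the paper's method): pairwise distances between 3D joints are unchanged by camera rotation, so they form a basic viewpoint-invariant feature.

```python
import numpy as np

# Hypothetical illustration (not the paper's geometric layer): pairwise
# joint distances of a 3D pose are invariant to rigid camera rotation,
# one simple way to derive viewpoint-invariant features from pose.

def pairwise_distances(pose):
    """pose: (J, 3) array of 3D joint coordinates -> (J, J) distance matrix."""
    diff = pose[:, None, :] - pose[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def rotation_z(theta):
    """Rotation matrix about the z-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

rng = np.random.default_rng(0)
pose = rng.standard_normal((17, 3))        # e.g. 17 COCO-style joints
rotated = pose @ rotation_z(np.pi / 3).T   # simulate an unseen viewpoint

# Raw coordinates change under the new viewpoint, but the distance
# features do not.
assert not np.allclose(pose, rotated)
assert np.allclose(pairwise_distances(pose), pairwise_distances(rotated))
```

A learned layer can go further than fixed distance features, but this sketch captures why pose-based representations are a natural starting point for unseen-view recognition.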

Related Material


BibTeX

@InProceedings{Piergiovanni_2021_CVPR,
  author    = {Piergiovanni, AJ and Ryoo, Michael S.},
  title     = {Recognizing Actions in Videos From Unseen Viewpoints},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2021},
  pages     = {4124-4132}
}