Simultaneous Segmentation and Recognition: Towards More Accurate Ego Gesture Recognition

Tejo Chalasani, Aljosa Smolic; The IEEE International Conference on Computer Vision (ICCV) Workshops, 2019

Abstract


Ego hand gestures can be used as an interface in AR and VR environments. While the context of an image is important for tasks like scene understanding, object recognition, image caption generation and activity recognition, it plays a minimal role in ego hand gesture recognition: an ego hand gesture used in AR and VR environments conveys the same information regardless of the background. With this idea in mind, we present our work on ego hand gesture recognition, which produces embeddings from RGB images containing ego hands that are used simultaneously for ego hand segmentation and ego gesture recognition. With this approach, we achieve better recognition accuracy (96.9%) than the state of the art (92.2%) on the largest publicly available ego hand gesture dataset. We present a deep neural network that recognises ego hand gestures from videos (each containing a single gesture) by generating and recognising embeddings of ego hands from image sequences of varying length. We introduce the concept of simultaneous segmentation and recognition applied to ego hand gestures, and present the network architecture, the training procedure, and results compared to the state of the art on the EgoGesture dataset.
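The core idea of the abstract, a shared embedding feeding both a segmentation head and a sequence-level recognition head, can be sketched as a multi-task network. The sketch below is a minimal, hypothetical PyTorch illustration, not the authors' architecture: layer sizes, the GRU aggregator, and the class count (EgoGesture defines 83 gesture classes) are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class SimSegRecNet(nn.Module):
    """Hypothetical sketch of simultaneous segmentation and recognition:
    a shared encoder produces per-frame embeddings that drive both an
    ego-hand segmentation decoder and a gesture classifier over the
    frame sequence. Layer sizes are illustrative only."""

    def __init__(self, num_gestures=83, embed_dim=64):
        super().__init__()
        # Shared convolutional encoder: RGB frame -> embedding feature map
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, embed_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Segmentation head: upsample the embedding back to a hand mask
        self.seg_head = nn.Sequential(
            nn.ConvTranspose2d(embed_dim, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )
        # Recognition head: pool per-frame embeddings, aggregate over
        # time with a GRU (handles sequences of varying length), classify
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.gru = nn.GRU(embed_dim, 128, batch_first=True)
        self.classifier = nn.Linear(128, num_gestures)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W); time may vary between batches
        b, t, c, h, w = frames.shape
        feats = self.encoder(frames.reshape(b * t, c, h, w))
        masks = self.seg_head(feats).reshape(b, t, 1, h, w)
        emb = self.pool(feats).flatten(1).reshape(b, t, -1)
        _, hidden = self.gru(emb)
        logits = self.classifier(hidden[-1])
        return masks, logits
```

Training such a sketch would sum a per-pixel segmentation loss on the masks with a cross-entropy loss on the gesture logits, so both tasks shape the shared embedding.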

Related Material


[bibtex]
@InProceedings{Chalasani_2019_ICCV,
author = {Chalasani, Tejo and Smolic, Aljosa},
title = {Simultaneous Segmentation and Recognition: Towards More Accurate Ego Gesture Recognition},
booktitle = {The IEEE International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2019}
}