SignBERT: Pre-Training of Hand-Model-Aware Representation for Sign Language Recognition

Hezhen Hu, Weichao Zhao, Wengang Zhou, Yuechen Wang, Houqiang Li; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 11087-11096

Abstract


Hand gestures play a critical role in sign language. Current deep-learning-based sign language recognition (SLR) methods may suffer from insufficient interpretability and overfitting due to limited sign data sources. In this paper, we introduce SignBERT, the first self-supervised pre-trainable framework with an incorporated hand prior for SLR. SignBERT views the hand pose as a visual token, which is derived from an off-the-shelf pose extractor. The visual tokens are then embedded with gesture state, temporal, and hand chirality information. To take full advantage of the available sign data sources, SignBERT first performs self-supervised pre-training by masking and reconstructing visual tokens. Together with several masked modeling strategies, we incorporate the hand prior in a model-aware method to better capture hierarchical context over the hand sequence. Then, with a prediction head added, SignBERT is fine-tuned to perform the downstream SLR task. To validate the effectiveness of our method on SLR, we perform extensive experiments on four public benchmark datasets, i.e., NMFs-CSL, SLR500, MSASL, and WLASL. Experimental results demonstrate the effectiveness of both the self-supervised learning and the incorporated hand prior. Furthermore, we achieve state-of-the-art performance on all benchmarks with a notable gain.
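To make the pre-training pipeline described above concrete, below is a minimal PyTorch sketch of masked pose-token modeling. It is not the authors' released code: each frame's 2D hand pose is treated as one visual token (the token itself carries the gesture state), temporal and chirality embeddings are added, a fraction of tokens is masked, and the masked poses are reconstructed. The paper's hand-model-aware decoder is simplified to a plain linear head here, and all module names, dimensions, and the 15% mask ratio are illustrative assumptions.

import torch
import torch.nn as nn


class MaskedPoseBERT(nn.Module):
    def __init__(self, num_joints=21, d_model=256, num_layers=6, max_len=512):
        super().__init__()
        # Each frame's hand pose (num_joints x 2 coordinates) becomes one visual token.
        self.token_embed = nn.Linear(num_joints * 2, d_model)
        # Learnable embeddings for temporal position and hand chirality (left/right).
        self.temporal_embed = nn.Embedding(max_len, d_model)
        self.chirality_embed = nn.Embedding(2, d_model)
        # Learnable vector substituted for masked tokens.
        self.mask_token = nn.Parameter(torch.zeros(1, 1, d_model))
        encoder_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers)
        # Simplified reconstruction head; the paper instead decodes through a hand model.
        self.recon_head = nn.Linear(d_model, num_joints * 2)

    def forward(self, poses, chirality, mask):
        # poses: (B, T, num_joints*2); chirality: (B, T) in {0, 1};
        # mask: (B, T) boolean, True where the token is masked out.
        B, T, _ = poses.shape
        tokens = self.token_embed(poses)
        tokens = torch.where(mask.unsqueeze(-1),
                             self.mask_token.expand(B, T, -1), tokens)
        t_idx = torch.arange(T, device=poses.device).expand(B, T)
        tokens = tokens + self.temporal_embed(t_idx) + self.chirality_embed(chirality)
        return self.recon_head(self.encoder(tokens))


# One pre-training step: reconstruct poses, supervising only masked positions.
model = MaskedPoseBERT()
poses = torch.randn(2, 16, 42)                   # dummy pose sequence
chirality = torch.zeros(2, 16, dtype=torch.long)
mask = torch.rand(2, 16) < 0.15                  # mask ~15% of tokens (assumed ratio)
pred = model(poses, chirality, mask)
loss = ((pred - poses) ** 2)[mask].mean()
loss.backward()

For the downstream SLR task, the reconstruction head would be swapped for a classification head over the sign vocabulary and the whole model fine-tuned, as the abstract describes.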

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Hu_2021_ICCV,
    author    = {Hu, Hezhen and Zhao, Weichao and Zhou, Wengang and Wang, Yuechen and Li, Houqiang},
    title     = {SignBERT: Pre-Training of Hand-Model-Aware Representation for Sign Language Recognition},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {11087-11096}
}