[pdf] [supp] [arXiv] [bibtex]

@InProceedings{Hwang_2025_WACV,
  author    = {Hwang, Eui Jun and Cho, Sukmin and Lee, Huije and Yoon, Youngwoo and Park, Jong C.},
  title     = {A Spatio-Temporal Representation Learning as an Alternative to Traditional Glosses in Sign Language Translation and Production},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {3352-3362}
}
A Spatio-Temporal Representation Learning as an Alternative to Traditional Glosses in Sign Language Translation and Production
Abstract
This work addresses the challenges associated with the use of glosses in both Sign Language Translation (SLT) and Sign Language Production (SLP). While glosses have long served as a bridge between sign language and spoken language, they come with two major limitations that impede the advancement of sign language systems. First, annotating glosses is a labor-intensive and time-consuming process, which limits the scalability of datasets. Second, glosses oversimplify sign language by stripping away its spatio-temporal dynamics, reducing complex signs to basic labels and missing the subtle movements essential for precise interpretation. To address these limitations, we introduce Universal Gloss-level Representation (UniGloR), a framework designed to capture the spatio-temporal features inherent in sign language, offering a more dynamic and detailed alternative to glosses. The core idea of UniGloR is simple yet effective: we derive dense spatio-temporal representations from sign keypoint sequences via self-supervised learning and seamlessly integrate them into SLT and SLP tasks. Our experiments in a keypoint-based setting demonstrate that UniGloR matches or outperforms previous SLT and SLP methods on two widely used datasets, PHOENIX14T and How2Sign. Code is available at https://github.com/eddie-euijun-hwang/UniGloR.
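As a rough illustration of that core idea, the PyTorch sketch below shows one way dense spatio-temporal representations might be learned from keypoint sequences via self-supervised masked reconstruction. It is a minimal sketch, not the authors' implementation (see their repository for that): the masking objective, the names KeypointEncoder and masked_reconstruction_step, and all dimensions (e.g., 137 keypoints, 256-d features) are assumptions made here for illustration only.

import torch
import torch.nn as nn

class KeypointEncoder(nn.Module):
    # Encodes a sign keypoint sequence of shape (batch, frames, keypoints*2)
    # into dense per-frame features with a Transformer over time.
    def __init__(self, num_keypoints=137, dim=256, depth=4, heads=8):
        super().__init__()
        self.proj = nn.Linear(num_keypoints * 2, dim)   # flatten (x, y) per frame
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))

    def forward(self, kps, mask=None):
        x = self.proj(kps)                               # (B, T, dim)
        if mask is not None:                             # swap masked frames for a learned token
            x = torch.where(mask[..., None], self.mask_token.expand_as(x), x)
        return self.encoder(x)                           # dense spatio-temporal features

def masked_reconstruction_step(encoder, head, kps, mask_ratio=0.4):
    # One self-supervised step: hide random frames, reconstruct their keypoints.
    B, T, _ = kps.shape
    mask = torch.rand(B, T, device=kps.device) < mask_ratio
    feats = encoder(kps, mask)
    recon = head(feats)                                  # (B, T, keypoints*2)
    return ((recon - kps) ** 2)[mask].mean()             # loss only on masked frames

if __name__ == "__main__":
    K = 137
    encoder = KeypointEncoder(num_keypoints=K)
    head = nn.Linear(256, K * 2)
    kps = torch.randn(2, 64, K * 2)                      # dummy batch: 2 clips, 64 frames
    loss = masked_reconstruction_step(encoder, head, kps)
    loss.backward()
    print(f"reconstruction loss: {loss.item():.4f}")

Once pretrained under such an objective, the frozen per-frame features could stand in for gloss supervision, feeding an SLT decoder or conditioning an SLP generator, which is the kind of integration the abstract describes.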