A Comparative Study of Continuous Sign Language Recognition Techniques

Sarah Alyami, Hamzah Luqman; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2025, pp. 4923-4932

Abstract

Continuous Sign Language Recognition (CSLR) focuses on interpreting sequences of sign language gestures performed continuously, without pauses. Despite significant advances in CSLR, most methods have not been rigorously tested under signer-independent and unseen-sentence evaluation protocols, a critical limitation for real-world applicability. To address this gap, we present a comprehensive evaluation of seven prominent RGB-based CSLR models under rigorous conditions, including evaluations with unseen signers and novel sentences. We benchmark these models on widely used datasets (RWTH-PHOENIX-Weather-2014, CSL-Daily) as well as underexplored sign language datasets (ArabSign, GrSL). The experiments establish new benchmarks for CSLR and provide valuable insights into the robustness and generalization of CSLR models across different sign languages. The study also identifies the strengths and limitations of the evaluated techniques, underscoring the need for more inclusive and realistic evaluation protocols to advance CSLR toward practical applications. The findings reveal a fundamental trade-off between generalizing to novel sentences and adapting to signer variability. We further analyze model efficiency, noting that some models require roughly double the training time of others on large datasets, emphasizing the importance of optimizing both accuracy and computational cost for practical deployment.
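To illustrate the two evaluation protocols the abstract contrasts, the following is a minimal sketch of how a signer-independent split differs from an unseen-sentence split. The field names (`signer`, `sentence`), helper functions, and toy corpus are illustrative assumptions, not taken from the paper or its datasets.

```python
# Illustrative sketch of the two evaluation protocols (field names and
# data are hypothetical, not from the paper).

def signer_independent_split(samples, held_out_signers):
    """Test samples come only from signers never seen during training;
    sentences may still overlap between the two sets."""
    train = [s for s in samples if s["signer"] not in held_out_signers]
    test = [s for s in samples if s["signer"] in held_out_signers]
    return train, test

def unseen_sentence_split(samples, held_out_sentences):
    """Test samples contain only sentences absent from training;
    the same signers may appear in both sets."""
    train = [s for s in samples if s["sentence"] not in held_out_sentences]
    test = [s for s in samples if s["sentence"] in held_out_sentences]
    return train, test

# Toy corpus of (signer, sentence) pairs.
corpus = [
    {"signer": "A", "sentence": "s1"},
    {"signer": "A", "sentence": "s2"},
    {"signer": "B", "sentence": "s1"},
    {"signer": "B", "sentence": "s3"},
]

train, test = signer_independent_split(corpus, held_out_signers={"B"})
```

A model that looks strong under a random split can degrade sharply under either protocol, which is why the study evaluates both: the first stresses robustness to signer variability, the second stresses linguistic generalization to novel sentences.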

Related Material

[pdf] [arXiv]
[bibtex]
@InProceedings{Alyami_2025_ICCV,
    author    = {Alyami, Sarah and Luqman, Hamzah},
    title     = {A Comparative Study of Continuous Sign Language Recognition Techniques},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2025},
    pages     = {4923-4932}
}