Printed from https://ideas.repec.org/a/gam/jmathe/v13y2025i6p909-d1608156.html

Adaptive Transformer-Based Deep Learning Framework for Continuous Sign Language Recognition and Translation

Authors

Listed:
  • Yahia Said

    (Center for Scientific Research and Entrepreneurship, Northern Border University, Arar 73213, Saudi Arabia
    King Salman Center for Disability Research, Riyadh 11614, Saudi Arabia)

  • Sahbi Boubaker

    (Department of Computer & Network Engineering, College of Computer Science and Engineering, University of Jeddah, Jeddah 21959, Saudi Arabia)

  • Saleh M. Altowaijri

    (Department of Information Systems, Faculty of Computing and Information Technology, Northern Border University, Rafha 91911, Saudi Arabia)

  • Ahmed A. Alsheikhy

    (Department of Electrical Engineering, College of Engineering, Northern Border University, Arar 91431, Saudi Arabia)

  • Mohamed Atri

    (College of Computer Sciences, King Khalid University, Abha 62529, Saudi Arabia)

Abstract

Sign language recognition and translation remain pivotal for facilitating communication among the deaf and hearing communities. However, end-to-end sign language translation (SLT) faces major challenges, including weak temporal correspondence between sign language (SL) video frames and gloss annotations and the complexity of sequence alignment between long SL videos and natural language sentences. In this paper, we propose an Adaptive Transformer (ADTR)-based deep learning framework that enhances SL video processing for robust and efficient SLT. The proposed model incorporates three novel modules: Adaptive Masking (AM), Local Clip Self-Attention (LCSA), and Adaptive Fusion (AF) to optimize feature representation. The AM module dynamically removes redundant video frame representations, improving temporal alignment, while the LCSA module learns hierarchical representations at both local clip and full-video levels using a refined self-attention mechanism. Additionally, the AF module fuses multi-scale temporal and spatial features to enhance model robustness. Unlike conventional SLT models, our framework eliminates the reliance on gloss annotations, enabling direct translation from SL video sequences to spoken language text. The proposed method was evaluated using the ArabSign dataset, demonstrating state-of-the-art performance in translation accuracy, processing efficiency, and real-time applicability. The achieved results confirm that ADTR is a highly effective and scalable deep learning solution for continuous sign language recognition, positioning it as a promising AI-driven approach for real-world assistive applications.
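The three modules described above can be illustrated with a minimal numpy sketch. This is an assumption-laden toy, not the authors' implementation: the redundancy threshold, the identity query/key/value projections, the fixed clip length, and the sigmoid gate in the fusion step are all simplifications chosen here for illustration. It shows the data flow the abstract describes: drop near-duplicate frame representations (AM), attend within local clips and across clip-level summaries (LCSA), then gate the two streams together (AF).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # Scaled dot-product self-attention with identity projections (for brevity).
    scores = x @ x.T / np.sqrt(x.shape[-1])
    return softmax(scores, axis=-1) @ x

def adaptive_masking(frames, threshold=0.5):
    # AM (sketch): keep a frame only if it differs enough from the last kept one,
    # removing redundant video-frame representations.
    kept = [frames[0]]
    for f in frames[1:]:
        if np.linalg.norm(f - kept[-1]) > threshold:
            kept.append(f)
    return np.stack(kept)

def local_clip_self_attention(frames, clip_len=4):
    # LCSA (sketch): attention within fixed-length clips (local level),
    # then attention over mean-pooled clip summaries (full-video level).
    clips = [frames[i:i + clip_len] for i in range(0, len(frames), clip_len)]
    local = np.concatenate([self_attention(c) for c in clips])
    global_ = self_attention(np.stack([c.mean(axis=0) for c in clips]))
    return local, global_

def adaptive_fusion(local, global_):
    # AF (sketch): broadcast clip-level context back to frame rate and
    # blend the two streams with a data-dependent sigmoid gate.
    clip_len = int(np.ceil(len(local) / len(global_)))
    expanded = np.repeat(global_, clip_len, axis=0)[:len(local)]
    gate = 1.0 / (1.0 + np.exp(-(local * expanded).sum(axis=-1, keepdims=True)))
    return gate * local + (1.0 - gate) * expanded

rng = np.random.default_rng(0)
video = rng.standard_normal((16, 8))            # 16 frames, 8-dim features
frames = adaptive_masking(video)
local, global_ = local_clip_self_attention(frames, clip_len=4)
fused = adaptive_fusion(local, global_)
print(fused.shape)
```

In the full model these fused features would feed a transformer decoder that emits spoken-language text directly, without intermediate gloss annotations.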

Suggested Citation

  • Yahia Said & Sahbi Boubaker & Saleh M. Altowaijri & Ahmed A. Alsheikhy & Mohamed Atri, 2025. "Adaptive Transformer-Based Deep Learning Framework for Continuous Sign Language Recognition and Translation," Mathematics, MDPI, vol. 13(6), pages 1-23, March.
  • Handle: RePEc:gam:jmathe:v:13:y:2025:i:6:p:909-:d:1608156

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/13/6/909/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/13/6/909/
    Download Restriction: no

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:13:y:2025:i:6:p:909-:d:1608156. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows your profile to be linked to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.