
Automated Gesture Recognition Using Applied Linguistics With Data-Driven Deep Learning For Arabic Speech Translation

Author

Listed:
  • SAAD ALAHMARI

    (Department of Computer Science, Applied College, Northern Border University, Arar, Saudi Arabia)

  • BADRIYYA B. AL-ONAZI

    (Department of Arabic Language and Literature, College of Humanities and Social Sciences, Princess Nourah bint Abdulrahman University, P. O. Box 84428, Riyadh 11671, Saudi Arabia)

  • NOUF J. ALJOHANI

    (Department of Language and Translation, University of Jeddah, Jeddah, Saudi Arabia)

  • KHADIJA ABDULLAH ALZAHRANI

    (Saudi Arabia Ministry of Education, Riyadh, Saudi Arabia)

  • FAIZ ABDULLAH ALOTAIBI

    (Department of Information Science, College of Humanities and Social Sciences, King Saud University, P. O. Box 28095, Riyadh 11437, Saudi Arabia)

  • MANAR ALMANEA

    (Department of English, College of Languages and Translation, Imam Mohammad Ibn Saud Islamic University, Riyadh 11432, Saudi Arabia)

  • MRIM M. ALNFIAI

    (Department of Information Technology, College of Computers and Information Technology, Taif University, P. O. Box 11099, Taif 21944, Saudi Arabia)

  • HANY MAHGOUB

    (Department of Computer Science, Applied College at Mahayil, King Khalid University, Abha, Asir, Saudi Arabia; Computer Science Department, Faculty of Computers and Information, Menoufia University, Menoufia, Egypt)

Abstract

Gesture recognition for Arabic speech translation involves developing advanced technologies that accurately translate the body and hand movements of Arabic Sign Language (ArSL) into spoken Arabic. Such systems leverage machine learning and computer vision techniques within complex systems simulation platforms to analyze ArSL gestures, detecting subtle differences in facial expressions, hand shapes, and movements. Sign Language Recognition (SLR) is paramount in assisting communication for the Deaf and Hard-of-Hearing communities, and it draws on both vision-based methods and Surface Electromyography (sEMG) signals. The sEMG signal is crucial for recognizing hand gestures because it captures the underlying muscular activity, and researchers have shown that EMG signals convey fine-grained detail, particularly for classifying hand gestures. This capability aids the interpretation and recognition of sign languages and the investigation of signed-language phonology. Leveraging machine learning algorithms and signal processing techniques within complex systems simulation platforms, researchers aim to extract features from sEMG signals that correspond to distinct ArSL gestures. This study introduces an Enhanced Dwarf Mongoose Algorithm with Deep Learning-Driven Arabic Sign Language Detection (EDMODL-ASLD) technique on sEMG data. In the initial phase, the EDMODL-ASLD model preprocesses the input sEMG data into a suitable format. In the next stage, feature extraction based on fractal theory gathers relevant, nonredundant information from each EMG window to construct a feature vector. Five time-domain features are extracted per EMG window: absolute envelope (AE), energy (E), root-mean square (RMS), standard deviation (STD), and mean absolute value (MAV). A dilated convolutional long short-term memory (ConvLSTM) model is then used to recognize the distinct signs. To improve the results of the dilated ConvLSTM model, hyperparameter selection is performed with the EDMO model. To demonstrate the significance of the EDMODL-ASLD technique, a brief experimental validation is conducted on an Arabic SLR database, where the technique attains a superior accuracy of 96.47% over recent deep learning approaches.
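
The five time-domain features named in the abstract are standard quantities computed over a windowed sEMG signal. The following Python sketch is a minimal illustration, not the authors' code: the absolute-envelope definition via the analytic signal, and the window length and overlap in the usage example, are assumptions, since the abstract does not specify them.

    import numpy as np
    from scipy.signal import hilbert

    def emg_features(window):
        # Five time-domain features named in the abstract,
        # computed for one 1-D sEMG window.
        x = np.asarray(window, dtype=float)
        mav = np.mean(np.abs(x))        # mean absolute value (MAV)
        rms = np.sqrt(np.mean(x ** 2))  # root-mean square (RMS)
        std = np.std(x)                 # standard deviation (STD)
        energy = np.sum(x ** 2)         # signal energy (E)
        # Absolute envelope (AE): mean magnitude of the analytic
        # signal. This definition is an assumption; the abstract
        # does not say how AE is computed.
        ae = np.mean(np.abs(hilbert(x)))
        return np.array([ae, energy, rms, std, mav])

    # Illustrative windowing (length and overlap are assumed values):
    sig = np.random.randn(2000)   # placeholder single-channel sEMG trace
    win, step = 200, 100          # 50% overlap
    feats = np.stack([emg_features(sig[i:i + win])
                      for i in range(0, len(sig) - win + 1, step)])

Each window thus yields one five-element feature vector, and stacking the windows produces the feature sequence fed to the classifier.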
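The abstract names a dilated ConvLSTM classifier but gives no architecture details, so the sketch below is hypothetical: it uses Keras's ConvLSTM1D layer (which accepts a dilation_rate argument), and the filter count, kernel size, dilation rate, and class count are illustrative placeholders rather than the paper's configuration.

    import tensorflow as tf

    def build_dilated_convlstm(n_windows, n_features, n_classes):
        # Each sample is a sequence of per-window feature vectors,
        # shaped (time steps, features, channels).
        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(n_windows, n_features, 1)),
            tf.keras.layers.ConvLSTM1D(filters=32, kernel_size=3,
                                       dilation_rate=2, padding="same"),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(n_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    # e.g. 18 windows of 5 features, 30 sign classes (all assumed):
    model = build_dilated_convlstm(18, 5, 30)

In the paper, the EDMO model performs the hyperparameter selection for the dilated ConvLSTM; in a sketch like this, any off-the-shelf hyperparameter search loop could stand in for it.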

Suggested Citation

  • Saad Alahmari & Badriyya B. Al-Onazi & Nouf J. Aljohani & Khadija Abdullah Alzahrani & Faiz Abdullah Alotaibi & Manar Almanea & Mrim M. Alnfiai & Hany Mahgoub, 2024. "Automated Gesture Recognition Using Applied Linguistics With Data-Driven Deep Learning For Arabic Speech Translation," FRACTALS (fractals), World Scientific Publishing Co. Pte. Ltd., vol. 32(09n10), pages 1-12.
  • Handle: RePEc:wsi:fracta:v:32:y:2024:i:09n10:n:s0218348x25400456
    DOI: 10.1142/S0218348X25400456

    Download full text from publisher

    File URL: http://www.worldscientific.com/doi/abs/10.1142/S0218348X25400456
    Download Restriction: Access to full text is restricted to subscribers

    File URL: https://libkey.io/10.1142/S0218348X25400456?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.
