Author
Listed:
- Wenfeng Yang
(Faculty of Applied Mathematics and Control Processes, Saint Petersburg State University, 198504 Saint Petersburg, Russia)
- Pengyi Li
(Faculty of Applied Mathematics and Control Processes, Saint Petersburg State University, 198504 Saint Petersburg, Russia)
- Wei Yang
(Faculty of Applied Mathematics and Control Processes, Saint Petersburg State University, 198504 Saint Petersburg, Russia)
- Yuxing Liu
(Faculty of Applied Mathematics and Control Processes, Saint Petersburg State University, 198504 Saint Petersburg, Russia)
- Yulong He
(Faculty of Applied Mathematics and Control Processes, Saint Petersburg State University, 198504 Saint Petersburg, Russia)
- Ovanes Petrosian
(Faculty of Applied Mathematics and Control Processes, Saint Petersburg State University, 198504 Saint Petersburg, Russia)
- Aleksandr Davydenko
(Faculty of Applied Mathematics and Control Processes, Saint Petersburg State University, 198504 Saint Petersburg, Russia)
Abstract
Automatic speech recognition (ASR) that relies on audio input alone suffers from significant degradation in noisy conditions and is particularly vulnerable to speech interference. However, video recordings of speech capture both visual and audio signals, providing a potent source of information for training speech models. Audiovisual speech recognition (AVSR) systems enhance the robustness of ASR by incorporating visual information from lip movements and the associated sound production in addition to the auditory input. There are many audiovisual speech recognition models and systems for speech transcription, but most of them have been tested in a single experimental setting and with a limited dataset. However, a good model should be applicable to any scenario. Our main contributions are: (i) Reproducing the three best-performing audiovisual speech recognition models in the current AVSR research area using the most widely used audiovisual databases, LRS2 (Lip Reading Sentences 2) and LRS3 (Lip Reading Sentences 3), and comparing and analyzing their performances under various noise conditions. (ii) Based on our experimental and research experience, analyzing the problems currently encountered in the AVSR domain, which we summarize as the feature-extraction problem and the domain-generalization problem. (iii) Showing, according to the experimental results, that the MoCo (momentum contrast) + word2vec (word to vector) model achieves the best AVSR performance on the LRS datasets both with and without noise. The model also produced the best results in the audio-only and video-only recognition experiments. Our research lays the foundation for further improving the performance of AVSR models.
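The abstract's best-performing model builds on MoCo (momentum contrast), whose core mechanism is a slowly updated "key" encoder maintained as an exponential moving average of the "query" encoder. The following is a minimal sketch of that momentum update; the function name, momentum value, and toy parameters are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def momentum_update(query_params, key_params, m=0.999):
    """MoCo-style momentum update of the key encoder:
    theta_k <- m * theta_k + (1 - m) * theta_q.
    The key encoder thus tracks the query encoder slowly,
    which keeps the contrastive dictionary consistent."""
    return [m * k + (1.0 - m) * q
            for q, k in zip(query_params, key_params)]

# Toy example with scalar "parameters" (hypothetical values):
q = [np.array([1.0]), np.array([2.0])]   # query-encoder weights
k = [np.array([0.0]), np.array([0.0])]   # key-encoder weights
k = momentum_update(q, k, m=0.9)
# With m=0.9, each key weight moves 10% of the way toward the query weight.
```

In the AVSR setting described above, such a momentum encoder would be used to learn audio/visual features by self-supervised contrastive pretraining before fine-tuning for recognition.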
Suggested Citation
Wenfeng Yang & Pengyi Li & Wei Yang & Yuxing Liu & Yulong He & Ovanes Petrosian & Aleksandr Davydenko, 2023.
"Research on Robust Audio-Visual Speech Recognition Algorithms,"
Mathematics, MDPI, vol. 11(7), pages 1-16, April.
Handle:
RePEc:gam:jmathe:v:11:y:2023:i:7:p:1733-:d:1116280