
Pushing the limits of remote RF sensing by reading lips under the face mask

Authors

Listed:
  • Hira Hameed

    (University of Glasgow, James Watt School of Engineering)

  • Muhammad Usman

(University of Glasgow, James Watt School of Engineering; School of Computing, Engineering and Built Environment, Glasgow Caledonian University)

  • Ahsen Tahir

(University of Glasgow, James Watt School of Engineering; University of Engineering and Technology)

  • Amir Hussain

    (Edinburgh Napier University)

  • Hasan Abbas

    (University of Glasgow, James Watt School of Engineering)

  • Tie Jun Cui

    (Southeast University)

  • Muhammad Ali Imran

    (University of Glasgow, James Watt School of Engineering)

  • Qammer H. Abbasi

    (University of Glasgow, James Watt School of Engineering)

Abstract

Lip-reading, the task of recognising speech from lip movements, has become an important research challenge in recent years. Most lip-reading technologies developed so far are camera-based and require video recording of the target. However, these technologies have well-known limitations of occlusion and ambient lighting, along with serious privacy concerns. Furthermore, vision-based technologies are not useful for multi-modal hearing aids in the coronavirus (COVID-19) environment, where face masks have become the norm. This paper aims to solve the fundamental limitations of camera-based systems by proposing a radio frequency (RF) based lip-reading framework with the ability to read lips under face masks. The framework employs Wi-Fi and radar technologies as enablers of RF sensing based lip-reading. A dataset comprising the vowels A, E, I, O, U and an empty class (static/closed lips) is collected using both technologies, with the speaker wearing a face mask. The collected data are used to train machine learning (ML) and deep learning (DL) models. A high classification accuracy of 95% is achieved on the Wi-Fi data using neural network (NN) models, and similar accuracy is achieved by the VGG16 deep learning model on the collected radar-based dataset.
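The abstract names two modelling branches: neural network classifiers for the Wi-Fi data and a VGG16 model for the radar data. The paper's exact pipeline is not reproduced on this page; the sketch below is only a minimal illustration of the radar branch, assuming the radar returns have already been rendered as spectrogram images sorted into six class folders. The directory names, image size, and training settings are illustrative assumptions, not the authors' configuration.

# Minimal sketch (TensorFlow/Keras): VGG16 transfer learning for
# six-class vowel recognition (A, E, I, O, U, empty) on radar
# spectrograms. Paths and hyperparameters below are hypothetical.
import tensorflow as tf

IMG_SIZE = (224, 224)   # VGG16's native input resolution
NUM_CLASSES = 6         # A, E, I, O, U, empty (static/closed lips)

# Hypothetical layout: radar_spectrograms/{train,val}/<class>/*.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "radar_spectrograms/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "radar_spectrograms/val", image_size=IMG_SIZE, batch_size=32)

# ImageNet-pretrained VGG16 backbone, frozen so that only the new
# classification head is trained.
base = tf.keras.applications.VGG16(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.vgg16.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)

The Wi-Fi branch would be analogous but simpler: the channel measurements become fixed-length feature vectors fed to a small dense network rather than a convolutional backbone.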

Suggested Citation

  • Hira Hameed & Muhammad Usman & Ahsen Tahir & Amir Hussain & Hasan Abbas & Tie Jun Cui & Muhammad Ali Imran & Qammer H. Abbasi, 2022. "Pushing the limits of remote RF sensing by reading lips under the face mask," Nature Communications, Nature, vol. 13(1), pages 1-9, December.
  • Handle: RePEc:nat:natcom:v:13:y:2022:i:1:d:10.1038_s41467-022-32231-1
    DOI: 10.1038/s41467-022-32231-1

    Download full text from publisher

    File URL: https://www.nature.com/articles/s41467-022-32231-1
    File Function: Abstract
    Download Restriction: no

    File URL: https://libkey.io/10.1038/s41467-022-32231-1?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy you can access through your library subscription.

    References listed on IDEAS

    1. Yijia Lu & Han Tian & Jia Cheng & Fei Zhu & Bin Liu & Shanshan Wei & Linhong Ji & Zhong Lin Wang, 2022. "Decoding lip language using triboelectric sensors with deep learning," Nature Communications, Nature, vol. 13(1), pages 1-12, December.

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Mahmoud Wagih & Junjie Shi & Menglong Li & Abiodun Komolafe & Thomas Whittaker & Johannes Schneider & Shanmugam Kumar & William Whittow & Steve Beeby, 2024. "Wide-range soft anisotropic thermistor with a direct wireless radio frequency interface," Nature Communications, Nature, vol. 15(1), pages 1-10, December.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Zhao, Lin-Chuan & Zhou, Teng & Chang, Si-Deng & Zou, Hong-Xiang & Gao, Qiu-Hua & Wu, Zhi-Yuan & Yan, Ge & Wei, Ke-Xiang & Yeatman, Eric M. & Meng, Guang & Zhang, Wen-Ming, 2024. "A disposable cup inspired smart floor for trajectory recognition and human-interactive sensing," Applied Energy, Elsevier, vol. 357(C).
    2. Jin Pyo Lee & Hanhyeok Jang & Yeonwoo Jang & Hyeonseo Song & Suwoo Lee & Pooi See Lee & Jiyun Kim, 2024. "Encoding of multi-modal emotional information via personalized skin-integrated wireless facial interface," Nature Communications, Nature, vol. 15(1), pages 1-13, December.
    3. Jiayue Zhang & Yikui Gao & Di Liu & Jing-Shan Zhao & Jie Wang, 2023. "Discharge domains regulation and dynamic processes of direct-current triboelectric nanogenerator," Nature Communications, Nature, vol. 14(1), pages 1-10, December.
    4. Hang Zhang & Sankaran Sundaresan & Michael A. Webb, 2024. "Thermodynamic driving forces in contact electrification between polymeric materials," Nature Communications, Nature, vol. 15(1), pages 1-9, December.
    5. Taemin Kim & Yejee Shin & Kyowon Kang & Kiho Kim & Gwanho Kim & Yunsu Byeon & Hwayeon Kim & Yuyan Gao & Jeong Ryong Lee & Geonhui Son & Taeseong Kim & Yohan Jun & Jihyun Kim & Jinyoung Lee & Seyun Um, 2022. "Ultrathin crystalline-silicon-based strain gauges with deep learning algorithms for silent speech interfaces," Nature Communications, Nature, vol. 13(1), pages 1-12, December.
    6. Yuxiang Shi & Peng Yang & Rui Lei & Zhaoqi Liu & Xuanyi Dong & Xinglin Tao & Xiangcheng Chu & Zhong Lin Wang & Xiangyu Chen, 2023. "Eye tracking and eye expression decoding based on transparent, flexible and ultra-persistent electrostatic interface," Nature Communications, Nature, vol. 14(1), pages 1-12, December.
    7. Sijia Xu & Jie-Xiang Yu & Hongshuang Guo & Shu Tian & You Long & Jing Yang & Lei Zhang, 2023. "Force-induced ion generation in zwitterionic hydrogels for a sensitive silent-speech sensor," Nature Communications, Nature, vol. 14(1), pages 1-11, December.

    More about this item

    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:natcom:v:13:y:2022:i:1:d:10.1038_s41467-022-32231-1. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.nature.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.