Author
Listed:
- Lu-Qi Tao
(Institute of Microelectronics and Tsinghua National Laboratory for Information Science and Technology (TNList), Tsinghua University)
- He Tian
(Institute of Microelectronics and Tsinghua National Laboratory for Information Science and Technology (TNList), Tsinghua University; University of Southern California)
- Ying Liu
(Institute of Microelectronics and Tsinghua National Laboratory for Information Science and Technology (TNList), Tsinghua University)
- Zhen-Yi Ju
(Institute of Microelectronics and Tsinghua National Laboratory for Information Science and Technology (TNList), Tsinghua University)
- Yu Pang
(Institute of Microelectronics and Tsinghua National Laboratory for Information Science and Technology (TNList), Tsinghua University)
- Yuan-Quan Chen
(Institute of Microelectronics and Tsinghua National Laboratory for Information Science and Technology (TNList), Tsinghua University)
- Dan-Yang Wang
(Institute of Microelectronics and Tsinghua National Laboratory for Information Science and Technology (TNList), Tsinghua University)
- Xiang-Guang Tian
(Institute of Microelectronics and Tsinghua National Laboratory for Information Science and Technology (TNList), Tsinghua University)
- Jun-Chao Yan
(Institute of Microelectronics and Tsinghua National Laboratory for Information Science and Technology (TNList), Tsinghua University)
- Ning-Qin Deng
(Institute of Microelectronics and Tsinghua National Laboratory for Information Science and Technology (TNList), Tsinghua University)
- Yi Yang
(Institute of Microelectronics and Tsinghua National Laboratory for Information Science and Technology (TNList), Tsinghua University)
- Tian-Ling Ren
(Institute of Microelectronics and Tsinghua National Laboratory for Information Science and Technology (TNList), Tsinghua University)
Abstract
Traditional sound sources and sound detectors in the human hearing range are usually independent, discrete devices. To minimize device size and enable integration with wearable electronics, there is an urgent need to realize sound generation and sound detection in a single device. Here we show an intelligent laser-induced graphene artificial throat, which can both generate and detect sound in a single device. More importantly, the intelligent artificial throat can significantly assist people with speech disabilities, because simple throat vibrations such as humming, coughing and screaming, with different intensities or frequencies, from a mute person can be detected and converted into controllable sounds. Furthermore, the laser-induced graphene artificial throat offers one-step fabrication, high efficiency, excellent flexibility and low cost, opening up practical applications in voice control, wearable electronics and many other areas.
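The pipeline the abstract describes, sensing a throat vibration, distinguishing humming, coughing and screaming by their intensity and frequency, and mapping each class to an output sound, can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the sampling rate, the intensity/frequency templates and the nearest-template matching are hypothetical stand-ins, not the authors' detection method or calibration.

```python
# Conceptual sketch (not from the paper): reduce a vibration frame to simple
# intensity and dominant-frequency features, match against hypothetical
# templates for "hum", "cough" and "scream", and report the detected class,
# which the device would then convert into a predefined output sound.
from __future__ import annotations
import numpy as np

FS = 1000  # assumed sampling rate of the vibration signal, in Hz

def extract_features(signal: np.ndarray) -> tuple[float, float]:
    """Return (RMS intensity, dominant frequency in Hz) of one frame."""
    rms = float(np.sqrt(np.mean(signal ** 2)))
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FS)
    dominant = float(freqs[np.argmax(spectrum[1:]) + 1])  # skip the DC bin
    return rms, dominant

# Hypothetical (intensity, frequency) templates; a real system would
# calibrate these per wearer from recorded throat vibrations.
TEMPLATES = {
    "hum":    (0.2, 100.0),
    "cough":  (0.8, 300.0),
    "scream": (1.5, 600.0),
}

def classify(signal: np.ndarray) -> str:
    """Nearest-template classification in a crudely normalized feature space."""
    rms, freq = extract_features(signal)
    def dist(name: str) -> float:
        t_rms, t_freq = TEMPLATES[name]
        return ((rms - t_rms) / 1.5) ** 2 + ((freq - t_freq) / 600.0) ** 2
    return min(TEMPLATES, key=dist)

if __name__ == "__main__":
    t = np.arange(FS) / FS  # one second of synthetic "hum"-like vibration
    hum = 0.2 * np.sin(2 * np.pi * 100 * t)
    print("detected:", classify(hum))  # -> detected: hum
```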
Suggested Citation
Lu-Qi Tao & He Tian & Ying Liu & Zhen-Yi Ju & Yu Pang & Yuan-Quan Chen & Dan-Yang Wang & Xiang-Guang Tian & Jun-Chao Yan & Ning-Qin Deng & Yi Yang & Tian-Ling Ren, 2017.
"An intelligent artificial throat with sound-sensing ability based on laser induced graphene,"
Nature Communications, Nature, vol. 8(1), pages 1-8, April.
Handle:
RePEc:nat:natcom:v:8:y:2017:i:1:d:10.1038_ncomms14579
DOI: 10.1038/ncomms14579
Citations
Citations are extracted by the CitEc Project.
Cited by:
- Jung Bae Lee & Jina Jang & Haoyu Zhou & Yoonjae Lee & Jung Bin In, 2020.
"Densified Laser-Induced Graphene for Flexible Microsupercapacitors,"
Energies, MDPI, vol. 13(24), pages 1-9, December.
- Ziyuan Che & Xiao Wan & Jing Xu & Chrystal Duan & Tianqi Zheng & Jun Chen, 2024.
"Speaking without vocal folds using a machine-learning-assisted wearable sensing-actuation system,"
Nature Communications, Nature, vol. 15(1), pages 1-11, December.
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:natcom:v:8:y:2017:i:1:d:10.1038_ncomms14579. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing. General contact details of provider: http://www.nature.com.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.