Author
Listed:
- Yin Jia
(Engineering Product Development Pillar, Singapore University of Technology and Design (SUTD), Singapore 487372, Singapore
These authors contributed equally to this work.)
- Prabakaran Veerajagadheswar
(Engineering Product Development Pillar, Singapore University of Technology and Design (SUTD), Singapore 487372, Singapore
These authors contributed equally to this work.)
- Rajesh Elara Mohan
(Engineering Product Development Pillar, Singapore University of Technology and Design (SUTD), Singapore 487372, Singapore)
- Balakrishnan Ramalingam
(Engineering Product Development Pillar, Singapore University of Technology and Design (SUTD), Singapore 487372, Singapore)
- Zhenyuan Yang
(Engineering Product Development Pillar, Singapore University of Technology and Design (SUTD), Singapore 487372, Singapore)
Abstract
Floor-cleaning robots are becoming popular and are increasingly deployed in public places to keep them clean and tidy. These robots often operate in dynamic environments that are less safe and carry a high probability of accidents. Sound-event-based context detection is expected to overcome the drawbacks of a robot's visual sensing in hazardous environments, especially under improper illumination and occlusion. Although numerous studies in the literature discuss the benefits of sound-based context detection, no work has been reported on context avoidance for cleaning robots. To this end, we propose a novel context-avoidance framework based on a deep-learning method that can detect and classify a specific sound, localize its source in the robot's frame, and avoid that environment. The proposed model receives spectrograms from an array of microphones as input and produces two parallel outputs: the first provides the spectrum class after running the classification task, and the second contains the localization message of the identified sound source. Once the location to be avoided is identified, the proposed module generates an alternative trajectory. The model is evaluated in two real-world scenarios, in which it is trained to detect the escalator sound in the robot's surroundings and avoid its location. In all considered scenarios, the developed system accomplished a significantly higher success rate in detecting and avoiding the escalator.
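The abstract's localization output depends on inter-microphone timing cues from the array. As an illustrative sketch only (the paper's own deep-learning localizer is not reproduced here), the classic GCC-PHAT method estimates the time delay of arrival between two microphone channels, from which a source bearing can be derived; the function name and parameters below are hypothetical.

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the time delay (seconds) of `sig` relative to `ref`
    using GCC-PHAT (generalized cross-correlation, phase transform)."""
    n = len(sig) + len(ref)            # zero-pad to avoid circular wrap-around
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-12             # PHAT weighting: keep phase, drop magnitude
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    if max_tau is not None:            # optionally limit search to physical delays
        max_shift = min(int(fs * max_tau), max_shift)
    # Rearrange so index 0 corresponds to a lag of -max_shift samples
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs                  # positive: `sig` lags `ref`

# Usage sketch: a noise burst delayed by 20 samples between two channels
fs = 16000
rng = np.random.default_rng(0)
ref = rng.standard_normal(fs)
delay = 20
sig = np.concatenate((np.zeros(delay), ref[:-delay]))
tau = gcc_phat(sig, ref, fs, max_tau=0.005)
```

Given the estimated delay and the known microphone spacing, the bearing follows from `theta = arcsin(tau * c / d)` with `c` the speed of sound and `d` the inter-microphone distance; a full array would fuse several such pairwise estimates.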
Suggested Citation
Yin Jia & Prabakaran Veerajagadheswar & Rajesh Elara Mohan & Balakrishnan Ramalingam & Zhenyuan Yang, 2023.
"Microphone-Based Context Awareness and Coverage Planner for a Service Robot Using Deep Learning Techniques,"
Mathematics, MDPI, vol. 11(8), pages 1-20, April.
Handle:
RePEc:gam:jmathe:v:11:y:2023:i:8:p:1766-:d:1118227