Abstract
As one of the hotspots in music information retrieval research, music recognition has received extensive attention in recent years. Most current methods are based on traditional signal processing, leaving considerable room for improvement in recognition accuracy and efficiency, and there are few studies on music recognition based on deep neural networks. This paper expounds the basic principles of deep learning and the basic structure and training methods of neural networks. For two commonly used deep networks, the convolutional neural network and the recurrent neural network, the typical structures, training methods, advantages, and disadvantages are analyzed. A variety of platforms and tools for training deep neural networks are also introduced and compared, and the TensorFlow and Keras frameworks are selected from among them, laying the foundation for the network training in this study. Through the development and experimental demonstration of a prototype system, and through comparison with other work in the field of humming recognition, the results show that the deep-learning method can be applied to the humming recognition problem, effectively improving recognition accuracy and reducing recognition time. A convolutional recurrent neural network is designed and implemented that combines the local feature extraction of convolutional layers with the ability of recurrent layers to summarize sequence features, so as to learn more abstract and complex representations of the humming signal; this capacity of neural networks to learn audio features underpins an efficient and accurate humming recognition process.
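To illustrate the architecture described above, the following is a minimal Keras sketch of a convolutional recurrent network of the kind the abstract outlines: convolutional layers extract local time-frequency features from a spectrogram, and a recurrent layer summarizes the resulting feature sequence over time. The input shape (128 frames by 64 mel bands), the layer sizes, and the number of candidate melodies (100) are all illustrative assumptions, not the paper's actual configuration.

```python
# Hedged sketch of a convolutional recurrent network for humming
# recognition. Assumes a mel-spectrogram input of shape
# (128 frames, 64 mel bands, 1 channel); all sizes are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 100  # hypothetical number of target melodies


def build_crnn(input_shape=(128, 64, 1), num_classes=NUM_CLASSES):
    return models.Sequential([
        layers.Input(shape=input_shape),
        # Convolutional layers: local time-frequency feature extraction.
        layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2)),   # -> (64, 32, 32)
        layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2)),   # -> (32, 16, 64)
        # Collapse the frequency axis so each time step is one vector.
        layers.Reshape((32, 16 * 64)),
        # Recurrent layer: summarizes the feature sequence over time.
        layers.Bidirectional(layers.GRU(64)),
        layers.Dense(num_classes, activation="softmax"),
    ])


model = build_crnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

A hummed query would then be converted to a spectrogram and scored against the candidate set with `model.predict`; the softmax output ranks the candidate melodies by match probability.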
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:hin:jnlmpe:1002105. See general information about how to correct material in RePEc.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Mohamed Abdelhakeem (email available below). General contact details of provider: https://www.hindawi.com .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.