Author
Listed:
- Junjie Bai
(School of Electrical and Information Engineering, Chongqing University of Science and Technology, Chongqing, China & School of Instrument Science and Engineering, Southeast University, Nanjing, China)
- Lixiao Feng
(School of Electrical and Information Engineering, Chongqing University of Science and Technology, Chongqing, China)
- Jun Peng
(School of Electrical and Information Engineering, Chongqing University of Science and Technology, Chongqing, China)
- Jinliang Shi
(School of Electrical and Information Engineering, Chongqing University of Science and Technology, Chongqing, China)
- Kan Luo
(School of Information Science and Engineering, Fujian University of Technology, Fuzhou, China)
- Zuojin Li
(School of Electrical and Information Engineering, Chongqing University of Science and Technology, Chongqing, China)
- Lu Liao
(School of Electrical and Information Engineering, Chongqing University of Science and Technology, Chongqing, China)
- Yingxu Wang
(International Institute of Cognitive Informatics and Cognitive Computing (ICIC), Laboratory for Computational Intelligence, Denotational Mathematics, and Software Science, Department of Electrical and Computer Engineering, Schulich School of Engineering and Hotchkiss Brain Institute, University of Calgary, Calgary, Canada & Information Systems Lab, Stanford University, Stanford, CA, USA)
Abstract
Music emotion recognition (MER) is a challenging field of study that has been addressed across multiple disciplines, including cognitive science, physiology, psychology, musicology, and the arts. In this paper, music emotions are modeled as a set of continuous variables composed of valence and arousal (VA) values based on the Valence-Arousal model. MER is formulated as a regression problem in which 548 dimensions of music features are extracted and selected. A wide range of methods, including multivariate adaptive regression splines, support vector regression (SVR), radial basis function, random forest regression (RFR), and regression neural networks, is adopted to recognize music emotions. Experimental results show that these regression algorithms achieve good regression performance for MER. The optimal R2 statistics for the VA values are 29.3% and 62.5%, respectively, obtained by the RFR and SVR algorithms in the relief feature space.
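As a rough illustration of the regression setup the abstract describes, the sketch below fits RFR and SVR models to separate valence and arousal targets and reports held-out R2. It uses synthetic random features and scikit-learn defaults, not the paper's 548 selected audio features, dataset, or tuned hyperparameters, so the scores shown are not comparable to the reported results.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-in for extracted music features (the paper uses 548 dims).
rng = np.random.default_rng(0)
n_samples, n_features = 200, 20
X = rng.normal(size=(n_samples, n_features))
w = rng.normal(size=n_features)
valence = X @ w + 0.1 * rng.normal(size=n_samples)
arousal = X @ w[::-1] + 0.1 * rng.normal(size=n_samples)

X_tr, X_te, v_tr, v_te, a_tr, a_te = train_test_split(
    X, valence, arousal, test_size=0.3, random_state=0)

# One regressor per emotion dimension, as in dimensional (VA) MER.
rfr = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, v_tr)
svr = SVR(kernel="rbf", C=10.0).fit(X_tr, a_tr)

r2_valence = r2_score(v_te, rfr.predict(X_te))
r2_arousal = r2_score(a_te, svr.predict(X_te))
print(f"valence R2 (RFR): {r2_valence:.3f}")
print(f"arousal R2 (SVR): {r2_arousal:.3f}")
```

In practice the VA values would come from listener annotations and the feature matrix from audio descriptors (timbre, rhythm, spectral statistics), with feature selection such as Relief applied before regression.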
Suggested Citation
Junjie Bai & Lixiao Feng & Jun Peng & Jinliang Shi & Kan Luo & Zuojin Li & Lu Liao & Yingxu Wang, 2016.
"Dimensional Music Emotion Recognition by Machine Learning,"
International Journal of Cognitive Informatics and Natural Intelligence (IJCINI), IGI Global, vol. 10(4), pages 74-89, October.
Handle:
RePEc:igg:jcini0:v:10:y:2016:i:4:p:74-89