Authors
Listed:
- Ke-Lin Du
(School of Mechanical and Electrical Engineering, Guangdong University of Science and Technology, Dongguan 523668, China)
- Rengong Zhang
(Zhejiang Yugong Information Technology Co., Ltd., Changhe Road 475, Hangzhou 310002, China)
- Bingchun Jiang
(School of Mechanical and Electrical Engineering, Guangdong University of Science and Technology, Dongguan 523668, China)
- Jie Zeng
(Shenzhen Feng Xing Tai Bao Technology Co., Ltd., Shenzhen 518063, China)
- Jiabin Lu
(Faculty of Electromechanical Engineering, Guangdong University of Technology, Guangzhou 510006, China)
Abstract
Machine learning has become indispensable across many domains, yet its theoretical underpinnings remain challenging for practitioners and researchers alike. Despite the abundance of available resources, there is a need for a cohesive tutorial that integrates foundational principles with state-of-the-art theory. This paper addresses the fundamental concepts and theories of machine learning, with an emphasis on neural networks, serving as both a foundational exploration and a tutorial. It begins by introducing essential concepts in machine learning, including learning and inference methods, criterion functions, robust learning, learning and generalization, model selection, the bias–variance trade-off, and the role of neural networks as universal approximators. The paper then delves into computational learning theory, of which probably approximately correct (PAC) learning theory forms the cornerstone. Key concepts such as the Vapnik–Chervonenkis (VC) dimension, Rademacher complexity, and the empirical risk minimization (ERM) principle are introduced as tools for establishing generalization error bounds for trained models. The fundamental theorem of learning theory establishes the relationship between PAC learnability, finite VC dimension, and the ERM principle. The paper also discusses the no-free-lunch theorem, another pivotal result in computational learning theory. By laying a rigorous theoretical foundation, this paper provides a comprehensive tutorial for understanding the principles underpinning machine learning.
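For orientation, the two quantities at the heart of the abstract admit compact standard statements. The notation below (hypothesis class \mathcal{H}, i.i.d. sample of size n, 0-1 loss, VC-dimension d) is generic and assumed here rather than quoted from the paper: the ERM principle selects the hypothesis with the smallest average loss on the training sample, and a classical VC-type bound then controls the gap between empirical and true risk.

    % ERM: choose the hypothesis minimizing the empirical (training) risk
    \hat{h} = \operatorname{arg\,min}_{h \in \mathcal{H}} \hat{R}_S(h),
    \qquad
    \hat{R}_S(h) = \frac{1}{n} \sum_{i=1}^{n} \ell\bigl(h(x_i), y_i\bigr)

    % Classical VC generalization bound for binary classification with 0-1 loss:
    % with probability at least 1 - \delta over the sample, simultaneously for all h in \mathcal{H},
    R(h) \le \hat{R}_S(h) + \sqrt{\frac{8}{n} \left( d \ln \frac{2en}{d} + \ln \frac{4}{\delta} \right)}

Bounds of the same shape hold with the Rademacher complexity of \mathcal{H} in place of the VC term; both are routes to the generalization error bounds the abstract mentions.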
Suggested Citation
Ke-Lin Du & Rengong Zhang & Bingchun Jiang & Jie Zeng & Jiabin Lu, 2025.
"Understanding Machine Learning Principles: Learning, Inference, Generalization, and Computational Learning Theory,"
Mathematics, MDPI, vol. 13(3), pages 1-56, January.
Handle:
RePEc:gam:jmathe:v:13:y:2025:i:3:p:451-:d:1579575
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:13:y:2025:i:3:p:451-:d:1579575. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help by adding them using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager. General contact details of provider: https://www.mdpi.com.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.