Printed from https://ideas.repec.org/a/gam/jmathe/v13y2025i3p451-d1579575.html

Understanding Machine Learning Principles: Learning, Inference, Generalization, and Computational Learning Theory

Authors

Listed:
  • Ke-Lin Du

    (School of Mechanical and Electrical Engineering, Guangdong University of Science and Technology, Dongguan 523668, China)

  • Rengong Zhang

    (Zhejiang Yugong Information Technology Co., Ltd., Changhe Road 475, Hangzhou 310002, China)

  • Bingchun Jiang

    (School of Mechanical and Electrical Engineering, Guangdong University of Science and Technology, Dongguan 523668, China)

  • Jie Zeng

    (Shenzhen Feng Xing Tai Bao Technology Co., Ltd., Shenzhen 518063, China)

  • Jiabin Lu

    (Faculty of Electromechanical Engineering, Guangdong University of Technology, Guangzhou 510006, China)

Abstract

Machine learning has become indispensable across various domains, yet understanding its theoretical underpinnings remains challenging for many practitioners and researchers. Despite the availability of numerous resources, there is a need for a cohesive tutorial that integrates foundational principles with state-of-the-art theories. This paper addresses the fundamental concepts and theories of machine learning, with an emphasis on neural networks, serving as both a foundational exploration and a tutorial. It begins by introducing essential concepts in machine learning, including various learning and inference methods, followed by criterion functions, robust learning, discussions on learning and generalization, model selection, the bias–variance trade-off, and the role of neural networks as universal approximators. Subsequently, the paper delves into computational learning theory, with probably approximately correct (PAC) learning theory forming its cornerstone. Key concepts such as the Vapnik–Chervonenkis (VC) dimension, Rademacher complexity, and the empirical risk minimization (ERM) principle are introduced as tools for establishing generalization error bounds for trained models. The fundamental theorem of learning theory establishes the relationship between PAC learnability, the VC-dimension, and the ERM principle. Additionally, the paper discusses the no-free-lunch theorem, another pivotal result in computational learning theory. By laying a rigorous theoretical foundation, this paper provides a comprehensive tutorial for understanding the principles underpinning machine learning.
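To give a concrete flavor of two concepts the abstract names, the sketch below (not taken from the paper itself; the toy data, the finite class of 1-D threshold classifiers, and all variable names are illustrative assumptions) runs empirical risk minimization over a small hypothesis class and Monte Carlo-estimates its empirical Rademacher complexity:

```python
import random

# Illustrative sketch: ERM over a finite class of 1-D threshold classifiers,
# plus a Monte Carlo estimate of the empirical Rademacher complexity of that
# class on a fixed sample. All quantities here are toy assumptions.

random.seed(0)

# Toy sample: points in [0, 1] labeled by a true threshold at 0.5 (no label noise).
xs = [i / 20 for i in range(21)]
ys = [1 if x >= 0.5 else -1 for x in xs]

thresholds = [i / 10 for i in range(11)]  # the finite hypothesis class H

def predict(t, x):
    """Threshold classifier h_t(x) = sign(x - t), mapping to {-1, +1}."""
    return 1 if x >= t else -1

def empirical_risk(t):
    """Average 0-1 loss of h_t on the sample."""
    return sum(predict(t, x) != y for x, y in zip(xs, ys)) / len(xs)

# ERM principle: pick the hypothesis with the smallest empirical risk.
erm_t = min(thresholds, key=empirical_risk)

def rademacher(n_trials=2000):
    """Estimate E_sigma[ sup_{h in H} (1/m) sum_i sigma_i h(x_i) ] by sampling
    random sign vectors sigma (Rademacher variables)."""
    m, total = len(xs), 0.0
    for _ in range(n_trials):
        sigma = [random.choice((-1, 1)) for _ in range(m)]
        total += max(sum(s * predict(t, x) for s, x in zip(sigma, xs)) / m
                     for t in thresholds)
    return total / n_trials

print("ERM threshold:", erm_t)                    # matches the true threshold 0.5
print("empirical risk:", empirical_risk(erm_t))   # zero on this noise-free sample
print("Rademacher estimate:", round(rademacher(), 3))
```

A small class (|H| = 11) on m = 21 points keeps the complexity estimate well below 1, consistent with Massart's bound of sqrt(2 ln|H| / m) for finite classes; a richer class would drive the estimate, and hence the corresponding generalization bound, upward.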

Suggested Citation

  • Ke-Lin Du & Rengong Zhang & Bingchun Jiang & Jie Zeng & Jiabin Lu, 2025. "Understanding Machine Learning Principles: Learning, Inference, Generalization, and Computational Learning Theory," Mathematics, MDPI, vol. 13(3), pages 1-56, January.
  • Handle: RePEc:gam:jmathe:v:13:y:2025:i:3:p:451-:d:1579575
Download full text from publisher

File URL: https://www.mdpi.com/2227-7390/13/3/451/pdf
Download Restriction: no

File URL: https://www.mdpi.com/2227-7390/13/3/451/
Download Restriction: no

    References listed on IDEAS

    1. Ke-Lin Du & M. N. S. Swamy & Zhang-Quan Wang & Wai Ho Mow, 2023. "Matrix Factorization Techniques in Machine Learning, Signal Processing, and Statistics," Mathematics, MDPI, vol. 11(12), pages 1-50, June.
    2. Ye Tian & Yang Feng, 2023. "Transfer Learning Under High-Dimensional Generalized Linear Models," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 118(544), pages 2684-2697, October.
    3. Hamsa Bastani, 2021. "Predicting with Proxies: Transfer Learning in High Dimension," Management Science, INFORMS, vol. 67(5), pages 2964-2984, May.
    4. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    5. Barry L. Nelson, 1990. "Control Variate Remedies," Operations Research, INFORMS, vol. 38(6), pages 974-992, December.
    6. Sai Li & T. Tony Cai & Hongzhe Li, 2023. "Transfer Learning in Large-Scale Gaussian Graphical Models with False Discovery Rate Control," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 118(543), pages 2171-2183, July.
    7. Portier, Francois & Segers, Johan, 2018. "Monte Carlo integration with a growing number of control variates," LIDAM Discussion Papers ISBA 2018001, Université catholique de Louvain, Institute of Statistics, Biostatistics and Actuarial Sciences (ISBA).
    8. Hirotugu Akaike, 1969. "Fitting autoregressive models for prediction," Annals of the Institute of Statistical Mathematics, Springer;The Institute of Statistical Mathematics, vol. 21(1), pages 243-247, December.
    9. Ke-Lin Du & Bingchun Jiang & Jiabin Lu & Jingyu Hua & M. N. S. Swamy, 2024. "Exploring Kernel Machines and Support Vector Machines: Principles, Techniques, and Future Directions," Mathematics, MDPI, vol. 12(24), pages 1-58, December.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Xiang, Pengcheng & Zhou, Ling & Tang, Lu, 2024. "Transfer learning via random forests: A one-shot federated approach," Computational Statistics & Data Analysis, Elsevier, vol. 197(C).
    2. Hao Zeng & Wei Zhong & Xingbai Xu, 2024. "Transfer Learning for Spatial Autoregressive Models with Application to U.S. Presidential Election Prediction," Papers 2405.15600, arXiv.org, revised Sep 2024.
    3. Ziyuan Wang & Lei Wang & Heng Lian, 2024. "Double debiased transfer learning for adaptive Huber regression," Scandinavian Journal of Statistics, Danish Society for Theoretical Statistics;Finnish Statistical Society;Norwegian Statistical Association;Swedish Statistical Association, vol. 51(4), pages 1472-1505, December.
    4. Ke-Lin Du & Rengong Zhang & Bingchun Jiang & Jie Zeng & Jiabin Lu, 2025. "Foundations and Innovations in Data Fusion and Ensemble Learning for Effective Consensus," Mathematics, MDPI, vol. 13(4), pages 1-49, February.
    5. Tulika Saha & Sriparna Saha & Pushpak Bhattacharyya, 2020. "Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-28, July.
    6. Kathryn M. Dominguez, 1991. "Do Exchange Auctions Work? An Examination of the Bolivian Experience," NBER Working Papers 3683, National Bureau of Economic Research, Inc.
    7. Mahmoud Mahfouz & Angelos Filos & Cyrine Chtourou & Joshua Lockhart & Samuel Assefa & Manuela Veloso & Danilo Mandic & Tucker Balch, 2019. "On the Importance of Opponent Modeling in Auction Markets," Papers 1911.12816, arXiv.org.
    8. Lixiang Zhang & Yan Yan & Yaoguang Hu, 2024. "Deep reinforcement learning for dynamic scheduling of energy-efficient automated guided vehicles," Journal of Intelligent Manufacturing, Springer, vol. 35(8), pages 3875-3888, December.
    9. Jacint Balaguer & Manuel Cantavella-Jorda, 2004. "Structural change in exports and economic growth: cointegration and causality analysis for Spain (1961-2000)," Applied Economics, Taylor & Francis Journals, vol. 36(5), pages 473-477.
    10. Muhammad Farooq Arby & Amjad Ali, 2017. "Threshold Inflation in Pakistan," SBP Research Bulletin, State Bank of Pakistan, Research Department, vol. 13, pages 1-19.
    11. Ramona Dumitriu & Razvan Stefanescu, 2015. "The Relationship Between Romanian Exports And Economic Growth After The Adhesion To European Union," Risk in Contemporary Economy, "Dunarea de Jos" University of Galati, Faculty of Economics and Business Administration, pages 17-26.
12. David F. Hendry & Hans-Martin Krolzig, 2005. "The Properties of Automatic "GETS" Modelling," Economic Journal, Royal Economic Society, vol. 115(502), pages C32-C61, March.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:13:y:2025:i:3:p:451-:d:1579575. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.