
Training Multilayer Neural Network Based on Optimal Control Theory for Limited Computational Resources

Author

Listed:
  • Ali Najem Alkawaz

    (Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur 50603, Malaysia)

  • Jeevan Kanesan

    (Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur 50603, Malaysia)

  • Anis Salwa Mohd Khairuddin

    (Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur 50603, Malaysia)

  • Irfan Anjum Badruddin

    (Mechanical Engineering Department, College of Engineering, King Khalid University, Abha 61421, Saudi Arabia)

  • Sarfaraz Kamangar

    (Mechanical Engineering Department, College of Engineering, King Khalid University, Abha 61421, Saudi Arabia)

  • Mohamed Hussien

    (Department of Chemistry, Faculty of Science, King Khalid University, P.O. Box 9004, Abha 61413, Saudi Arabia
    Pesticide Formulation Department, Central Agricultural Pesticide Laboratory, Agricultural Research Center, Dokki, Giza 12618, Egypt)

  • Maughal Ahmed Ali Baig

    (Department of Mechanical Engineering, CMR Technical Campus, Kandlakoya, Medchal Road, Hyderabad 501401, India)

  • N. Ameer Ahammad

    (Department of Mathematics, Faculty of Science, University of Tabuk, Tabuk 71491, Saudi Arabia)

Abstract

Backpropagation (BP)-based gradient descent is the standard approach to training a multilayer perceptron neural network. However, BP learns slowly and can become trapped in local minima, mainly because of its constant learning rate. This pre-fixed learning rate regularly drives the BP network towards an unsuccessful stochastic steepest descent. To overcome this limitation of BP, this work presents an improved method of training the neural network based on optimal control (OC) theory. The state equations of the optimal control formulation represent the BP neural network’s weights and biases, while the learning rate is treated as the control input, which adapts during the training process. The effectiveness of the proposed algorithm is evaluated on several logic gate models, namely XOR, AND, and OR, as well as a full adder model. Simulation results demonstrate that the proposed algorithm outperforms the conventional method, achieving higher output accuracy in a shorter training time. Training via OC also reduces the tendency to become trapped in local minima. The proposed algorithm is almost 40% faster than the steepest descent method, with a marginally improved accuracy of approximately 60%. Consequently, the proposed algorithm is well suited to devices with limited computational resources, since it is less complex and thus lowers the circuit’s power consumption.
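
The full optimal-control formulation is not reproduced on this page, so the sketch below (Python with NumPy) is only an illustration of the idea summarised in the abstract, not the authors' implementation: a small 2-2-1 multilayer perceptron is trained on XOR once with plain backpropagation at a fixed learning rate, and once with the same gradient step but with the step size re-chosen at every iteration, standing in for treating the learning rate as a control input that adapts during training. The architecture, the candidate step sizes, and the coarse per-iteration search are assumptions made for this illustration.

    import numpy as np

    # XOR data set: four input pairs and their targets.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    def init_params(seed=0):
        # Assumed toy architecture: a 2-2-1 multilayer perceptron.
        rng = np.random.default_rng(seed)
        return {"W1": rng.normal(0, 1, (2, 2)), "b1": np.zeros(2),
                "W2": rng.normal(0, 1, (2, 1)), "b2": np.zeros(1)}

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(p):
        h = sigmoid(X @ p["W1"] + p["b1"])    # hidden-layer activations
        out = sigmoid(h @ p["W2"] + p["b2"])  # network output
        return h, out

    def loss(p):
        return float(np.mean((forward(p)[1] - y) ** 2))  # mean squared error

    def gradients(p):
        # Standard backpropagation for the MSE loss with sigmoid units.
        h, out = forward(p)
        d_out = 2.0 * (out - y) / len(X) * out * (1.0 - out)
        d_h = (d_out @ p["W2"].T) * h * (1.0 - h)
        return {"W1": X.T @ d_h, "b1": d_h.sum(0),
                "W2": h.T @ d_out, "b2": d_out.sum(0)}

    def step(p, g, lr):
        return {k: p[k] - lr * g[k] for k in p}

    def train(adaptive, iters=2000, fixed_lr=0.5):
        p = init_params()
        for _ in range(iters):
            g = gradients(p)
            if adaptive:
                # Re-choose the step size every iteration: a crude surrogate for
                # treating the learning rate as an adapting control input.
                candidates = (0.05, 0.1, 0.5, 1.0, 2.0, 5.0)
                lr = min(candidates, key=lambda a: loss(step(p, g, a)))
            else:
                lr = fixed_lr  # conventional steepest descent: constant rate
            p = step(p, g, lr)
        return loss(p)

    print("fixed learning rate :", train(adaptive=False))
    print("adaptive step size  :", train(adaptive=True))

Picking the step size by evaluating the loss at a handful of candidates is only a stand-in for the paper's approach, which derives the adaptive learning rate from optimal control theory; the contrast with the fixed-rate run is what the example is meant to show.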

Suggested Citation

  • Ali Najem Alkawaz & Jeevan Kanesan & Anis Salwa Mohd Khairuddin & Irfan Anjum Badruddin & Sarfaraz Kamangar & Mohamed Hussien & Maughal Ahmed Ali Baig & N. Ameer Ahammad, 2023. "Training Multilayer Neural Network Based on Optimal Control Theory for Limited Computational Resources," Mathematics, MDPI, vol. 11(3), pages 1-15, February.
  • Handle: RePEc:gam:jmathe:v:11:y:2023:i:3:p:778-:d:1056869

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/11/3/778/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/11/3/778/
    Download Restriction: no

    References listed on IDEAS

    1. Tariq Mahmood & Nasir Ali & Naveed Ishtiaq Chaudhary & Khalid Mehmood Cheema & Ahmad H. Milyani & Muhammad Asif Zahoor Raja, 2022. "Novel Adaptive Bayesian Regularization Networks for Peristaltic Motion of a Third-Grade Fluid in a Planar Channel," Mathematics, MDPI, vol. 10(3), pages 1-23, January.
    2. Daizheng Huang & Zhihui Wu, 2017. "Forecasting outpatient visits using empirical mode decomposition coupled with back-propagation artificial neural networks optimized by particle swarm optimization," PLOS ONE, Public Library of Science, vol. 12(2), pages 1-17, February.
    3. Ebubekir Kaya, 2022. "A New Neural Network Training Algorithm Based on Artificial Bee Colony Algorithm for Nonlinear System Identification," Mathematics, MDPI, vol. 10(19), pages 1-27, September.
    4. Ke-Lin Du & Chi-Sing Leung & Wai Ho Mow & M. N. S. Swamy, 2022. "Perceptron: Learning, Generalization, Model Selection, Fault Tolerance, and Role in the Deep Learning Era," Mathematics, MDPI, vol. 10(24), pages 1-46, December.
    5. P. Tseng & S. Yun, 2009. "Block-Coordinate Gradient Descent Method for Linearly Constrained Nonsmooth Separable Optimization," Journal of Optimization Theory and Applications, Springer, vol. 140(3), pages 513-535, March.

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Yun Tan & Changshu Zhan & Youchun Pi & Chunhui Zhang & Jinghui Song & Yan Chen & Amir-Mohammad Golmohammadi, 2023. "A Hybrid Algorithm Based on Social Engineering and Artificial Neural Network for Fault Warning Detection in Hydraulic Turbines," Mathematics, MDPI, vol. 11(10), pages 1-18, May.
    2. Chaymae Rajafillah & Karim El Moutaouakil & Alina-Mihaela Patriciu & Ali Yahyaouy & Jamal Riffi, 2024. "INT-FUP: Intuitionistic Fuzzy Pooling," Mathematics, MDPI, vol. 12(11), pages 1-22, June.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Min Tao & Jiang-Ning Li, 2023. "Error Bound and Isocost Imply Linear Convergence of DCA-Based Algorithms to D-Stationarity," Journal of Optimization Theory and Applications, Springer, vol. 197(1), pages 205-232, April.
    2. Sherif A. Zaid & Ahmed M. Kassem & Aadel M. Alatwi & Hani Albalawi & Hossam AbdelMeguid & Atef Elemary, 2023. "Optimal Control of an Autonomous Microgrid Integrated with Super Magnetic Energy Storage Using an Artificial Bee Colony Algorithm," Sustainability, MDPI, vol. 15(11), pages 1-19, May.
    3. Le Thi Khanh Hien & Duy Nhat Phan & Nicolas Gillis, 2022. "Inertial alternating direction method of multipliers for non-convex non-smooth optimization," Computational Optimization and Applications, Springer, vol. 83(1), pages 247-285, September.
    4. Marjan Golob, 2023. "NARX Deep Convolutional Fuzzy System for Modelling Nonlinear Dynamic Processes," Mathematics, MDPI, vol. 11(2), pages 1-22, January.
    5. Dewei Zhang & Yin Liu & Sam Davanloo Tajbakhsh, 2022. "A First-Order Optimization Algorithm for Statistical Learning with Hierarchical Sparsity Structure," INFORMS Journal on Computing, INFORMS, vol. 34(2), pages 1126-1140, March.
    6. Liu, Yulan & Bi, Shujun, 2019. "Error bounds for non-polyhedral convex optimization and applications to linear convergence of FDM and PGM," Applied Mathematics and Computation, Elsevier, vol. 358(C), pages 418-435.
    7. Ion Necoara & Andrei Patrascu, 2014. "A random coordinate descent algorithm for optimization problems with composite objective function and linear coupled constraints," Computational Optimization and Applications, Springer, vol. 57(2), pages 307-337, March.
    8. S. Bonettini & M. Prato & S. Rebegoldi, 2018. "A block coordinate variable metric linesearch based proximal gradient method," Computational Optimization and Applications, Springer, vol. 71(1), pages 5-52, September.
    9. Sjur Didrik Flåm, 2019. "Blocks of coordinates, stochastic programming, and markets," Computational Management Science, Springer, vol. 16(1), pages 3-16, February.
    10. Pei Wang & Shunjie Chen & Sijia Yang, 2022. "Recent Advances on Penalized Regression Models for Biological Data," Mathematics, MDPI, vol. 10(19), pages 1-24, October.
    11. Masoud Ahookhosh & Le Thi Khanh Hien & Nicolas Gillis & Panagiotis Patrinos, 2021. "A Block Inertial Bregman Proximal Algorithm for Nonsmooth Nonconvex Problems with Application to Symmetric Nonnegative Matrix Tri-Factorization," Journal of Optimization Theory and Applications, Springer, vol. 190(1), pages 234-258, July.
    12. Ion Necoara & Yurii Nesterov & François Glineur, 2017. "Random Block Coordinate Descent Methods for Linearly Constrained Optimization over Networks," Journal of Optimization Theory and Applications, Springer, vol. 173(1), pages 227-254, April.
    13. Ching-pei Lee & Stephen J. Wright, 2019. "Inexact Successive quadratic approximation for regularized optimization," Computational Optimization and Applications, Springer, vol. 72(3), pages 641-674, April.
    14. Luoying Yang & Tong Tong Wu, 2023. "Model‐based clustering of high‐dimensional longitudinal data via regularization," Biometrics, The International Biometric Society, vol. 79(2), pages 761-774, June.
    15. Andrei Patrascu & Ion Necoara, 2015. "Efficient random coordinate descent algorithms for large-scale structured nonconvex optimization," Journal of Global Optimization, Springer, vol. 61(1), pages 19-46, January.
    16. Kimon Fountoulakis & Rachael Tappenden, 2018. "A flexible coordinate descent method," Computational Optimization and Applications, Springer, vol. 70(2), pages 351-394, June.
    17. Yuqia Wu & Shaohua Pan & Shujun Bi, 2021. "Kurdyka–Łojasiewicz Property of Zero-Norm Composite Functions," Journal of Optimization Theory and Applications, Springer, vol. 188(1), pages 94-112, January.
    18. Christian Kanzow & Theresa Lechner, 2021. "Globalized inexact proximal Newton-type methods for nonconvex composite functions," Computational Optimization and Applications, Springer, vol. 78(2), pages 377-410, March.
    19. Jin Zhang & Xide Zhu, 2022. "Linear Convergence of Prox-SVRG Method for Separable Non-smooth Convex Optimization Problems under Bounded Metric Subregularity," Journal of Optimization Theory and Applications, Springer, vol. 192(2), pages 564-597, February.
    20. Shuqin Sun & Ting Kei Pong, 2023. "Doubly iteratively reweighted algorithm for constrained compressed sensing models," Computational Optimization and Applications, Springer, vol. 85(2), pages 583-619, June.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:11:y:2023:i:3:p:778-:d:1056869. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows your profile to be linked to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager. General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.