
Complex Noise-Resistant Zeroing Neural Network for Computing Complex Time-Dependent Lyapunov Equation

Author

Listed:
  • Bolin Liao

    (College of Computer Science and Engineering, Jishou University, Jishou 416000, China)

  • Cheng Hua

    (College of Computer Science and Engineering, Jishou University, Jishou 416000, China)

  • Xinwei Cao

    (School of Business, Jiangnan University, Wuxi 214122, China)

  • Vasilios N. Katsikis

    (Department of Economics, Division of Mathematics and Informatics, National and Kapodistrian University of Athens, Sofokleous 1 Street, 10559 Athens, Greece)

  • Shuai Li

    (School of Engineering, Swansea University, Swansea SA2 8PP, UK)

Abstract

Complex time-dependent Lyapunov equation (CTDLE), as an important means of stability analysis of control systems, has been extensively employed in mathematics and engineering application fields. Recurrent neural networks (RNNs) have been reported as an effective method for solving the CTDLE. In previous work, zeroing neural networks (ZNNs) have been established to find the accurate solution of the time-dependent Lyapunov equation (TDLE) under noise-free conditions. However, noise is inevitable in actual implementation. In order to suppress the interference of various noises in practical applications, a complex noise-resistant ZNN (CNRZNN) model is proposed in this paper and employed to solve the CTDLE. Additionally, the convergence and robustness of the CNRZNN model are analyzed and proved theoretically. For verification and comparison, three experiments and the existing noise-tolerant ZNN (NTZNN) model are introduced to investigate the effectiveness, convergence and robustness of the CNRZNN model. Compared with the NTZNN model, the CNRZNN model has greater generality and stronger robustness. Specifically, the NTZNN model is a special case of the CNRZNN model, and the residual error of the CNRZNN model converges rapidly and stably to order 10⁻⁵ when solving the CTDLE under complex linear noises, which is much lower than the order 10⁻¹ of the NTZNN model. Analogously, under complex quadratic noises, the residual error of the CNRZNN model converges quickly and stably to 2‖A‖_F/ζ³, while the residual error of the NTZNN model diverges.
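
The record above gives only the abstract, not the CNRZNN dynamics themselves. As a rough illustration of how a noise-suppressing ZNN of this kind is typically simulated, the following Python sketch assumes the common integration-enhanced design formula dE/dt = −γE(t) − λ∫₀ᵗ E(s)ds applied to the Lyapunov error E(t) = A(t)ᴴX(t) + X(t)A(t) + C(t), discretized by forward Euler. The coefficient matrices, the gains γ and λ, and the noise term used in the usage example are illustrative assumptions, not values or formulas taken from the paper.

# Minimal sketch of an integration-enhanced (noise-suppressing) ZNN simulation
# for A(t)^H X(t) + X(t) A(t) + C(t) = 0.  NOT the paper's exact CNRZNN model.
import numpy as np

def lyap_error(A, X, C):
    # Residual E(t) = A^H X + X A + C of the Lyapunov equation.
    return A.conj().T @ X + X @ A + C

def simulate_znn(A_fun, C_fun, T=10.0, dt=1e-3, gamma=5.0, lam=5.0, noise=None):
    n = A_fun(0.0).shape[0]
    I = np.eye(n)
    X = np.zeros((n, n), dtype=complex)        # state (approximate solution)
    E_int = np.zeros((n, n), dtype=complex)    # running integral of the error
    residuals = []
    for k in range(int(T / dt)):
        t = k * dt
        A, C = A_fun(t), C_fun(t)
        dA = (A_fun(t + dt) - A) / dt          # forward-difference derivatives
        dC = (C_fun(t + dt) - C) / dt
        E = lyap_error(A, X, C)
        E_int += E * dt
        dE_target = -gamma * E - lam * E_int   # integration-enhanced design formula
        if noise is not None:
            dE_target = dE_target + noise(t)   # additive noise injected into the dynamics
        # dE/dt = A^H dX + dX A + dA^H X + X dA + dC.  Vectorize (column-major) and
        # solve (I kron A^H + A^T kron I) vec(dX) = vec(rhs) for dX.
        rhs = dE_target - dA.conj().T @ X - X @ dA - dC
        M = np.kron(I, A.conj().T) + np.kron(A.T, I)
        dX = np.linalg.lstsq(M, rhs.flatten(order='F'), rcond=None)[0]
        X = X + dt * dX.reshape(n, n, order='F')
        residuals.append(np.linalg.norm(E))    # Frobenius norm of the residual
    return X, residuals

# Usage with illustrative (assumed) time-varying coefficients and a complex linear noise:
A_fun = lambda t: np.array([[2 + 1j*np.sin(t), 0.5],
                            [0.5j,             3 + np.cos(t)]])
C_fun = lambda t: np.array([[np.cos(t),      1j*np.sin(t)],
                            [-1j*np.sin(t),  1.0          ]])
noise = lambda t: (0.3 + 0.1*t) * (1 + 1j) * np.ones((2, 2))
X, res = simulate_znn(A_fun, C_fun, noise=noise)
print(res[-1])   # final residual ||E(T)||_F

The Kronecker-product rearrangement is just one convenient way to extract dX/dt from the implicit ZNN dynamics in a discrete-time simulation; the paper's analog model, activation functions and proofs may proceed differently.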

Suggested Citation

  • Bolin Liao & Cheng Hua & Xinwei Cao & Vasilios N. Katsikis & Shuai Li, 2022. "Complex Noise-Resistant Zeroing Neural Network for Computing Complex Time-Dependent Lyapunov Equation," Mathematics, MDPI, vol. 10(15), pages 1-17, August.
  • Handle: RePEc:gam:jmathe:v:10:y:2022:i:15:p:2817-:d:883297

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/10/15/2817/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/10/15/2817/
    Download Restriction: no


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Vladislav N. Kovalnogov & Ruslan V. Fedorov & Dmitry A. Generalov & Andrey V. Chukalin & Vasilios N. Katsikis & Spyridon D. Mourtas & Theodore E. Simos, 2022. "Portfolio Insurance through Error-Correction Neural Networks," Mathematics, MDPI, vol. 10(18), pages 1-14, September.
    2. Qi, Zhaohui & Ning, Yingqiang & Xiao, Lin & Luo, Jiajie & Li, Xiaopeng, 2023. "Finite-time zeroing neural networks with novel activation function and variable parameter for solving time-varying Lyapunov tensor equation," Applied Mathematics and Computation, Elsevier, vol. 452(C).
    3. Spyridon D. Mourtas & Chrysostomos Kasimis, 2022. "Exploiting Mean-Variance Portfolio Optimization Problems through Zeroing Neural Networks," Mathematics, MDPI, vol. 10(17), pages 1-20, August.
    4. Hosseinipour-Mahani, N. & Malek, A., 2016. "A neurodynamic optimization technique based on overestimator and underestimator functions for solving a class of non-convex optimization problems," Mathematics and Computers in Simulation (MATCOM), Elsevier, vol. 122(C), pages 20-34.
    5. Zhang, Yunong & Zhai, Keke & Chen, Dechao & Jin, Long & Hu, Chaowei, 2016. "Challenging simulation practice (failure and success) on implicit tracking control of double-integrator system via Zhang-gradient method," Mathematics and Computers in Simulation (MATCOM), Elsevier, vol. 120(C), pages 104-119.
    6. Rabeh Abbassi & Houssem Jerbi & Mourad Kchaou & Theodore E. Simos & Spyridon D. Mourtas & Vasilios N. Katsikis, 2023. "Towards Higher-Order Zeroing Neural Networks for Calculating Quaternion Matrix Inverse with Application to Robotic Motion Tracking," Mathematics, MDPI, vol. 11(12), pages 1-21, June.
    7. Zhu, Jingcan & Jin, Jie & Chen, Weijie & Gong, Jianqiang, 2022. "A combined power activation function based convergent factor-variable ZNN model for solving dynamic matrix inversion," Mathematics and Computers in Simulation (MATCOM), Elsevier, vol. 197(C), pages 291-307.
    8. Stanimirović, Predrag S. & Mourtas, Spyridon D. & Mosić, Dijana & Katsikis, Vasilios N. & Cao, Xinwei & Li, Shuai, 2024. "Zeroing neural network approaches for computing time-varying minimal rank outer inverse," Applied Mathematics and Computation, Elsevier, vol. 465(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:10:y:2022:i:15:p:2817-:d:883297. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.