Author
Listed:
- Zhe Hu
(National Key Laboratory of Transient Physics, Nanjing University of Science and Technology, Nanjing 210094, China)
- Wenjun Yi
(National Key Laboratory of Transient Physics, Nanjing University of Science and Technology, Nanjing 210094, China)
- Liang Xiao
(School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China)
Abstract
This study presents an advanced second-order sliding-mode guidance law with a terminal impact angle constraint, which combines reinforcement learning with nonsingular terminal sliding-mode (NTSM) control theory. This hybrid approach effectively mitigates the chattering inherent in sliding-mode control while maintaining high control precision. We introduce a parameter into the super-twisting algorithm and develop an intelligent parameter-adaptive algorithm based on the Twin-Delayed Deep Deterministic Policy Gradient (TD3) framework. During the guidance phase, a pre-trained reinforcement learning model directly maps the missile's state variables to the optimal adaptive parameters, thereby significantly enhancing guidance performance. Additionally, a generalized super-twisting extended state observer (GSTESO) is introduced to estimate and compensate for the lumped uncertainty in the missile guidance system. This removes the need for prior information about the target's maneuvers, enabling the proposed guidance law to intercept maneuvering targets with unknown acceleration. The finite-time stability of the closed-loop guidance system is proven using the Lyapunov stability criterion. Simulations demonstrate that the proposed guidance law not only satisfies a wide range of impact angle constraints but also attains higher interception accuracy, a faster convergence rate, and better overall performance than the traditional NTSM and super-twisting NTSM (ST-NTSM) guidance laws: the interception accuracy is better than 0.1 m, and the impact angle error is less than 0.01°.
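The chattering attenuation the abstract describes rests on the standard super-twisting algorithm, which replaces the discontinuous switching control of first-order sliding modes with a continuous term plus an integral of the sign function. As a minimal sketch (not the paper's adaptive variant, and with gains k1, k2, the toy dynamics, and the disturbance chosen here purely for illustration), one step of the algorithm on a scalar sliding variable s looks like:

```python
import math

def super_twisting_step(s, v, k1, k2, dt):
    """One Euler step of the standard super-twisting algorithm:
    u = -k1*|s|^(1/2)*sign(s) + v,   dv/dt = -k2*sign(s).
    The control u is continuous in s, which is what suppresses chattering."""
    sign_s = (s > 0) - (s < 0)
    u = -k1 * math.sqrt(abs(s)) * sign_s + v
    v_next = v - k2 * sign_s * dt
    return u, v_next

# Toy first-order sliding dynamics ds/dt = u + d(t) with an unknown
# bounded disturbance d (standing in for target maneuver uncertainty).
s, v = 1.0, 0.0
k1, k2, dt = 1.5, 1.1, 1e-3
for i in range(20000):
    d = 0.3 * math.sin(0.001 * i)  # slowly varying disturbance, |d'| <= 0.3 < k2
    u, v = super_twisting_step(s, v, k1, k2, dt)
    s += (u + d) * dt
print(abs(s))  # sliding variable driven to a small neighborhood of zero
```

In the paper's scheme, the gains are not fixed as above: a pre-trained TD3 actor network maps the engagement state to the adaptive parameter online, and the GSTESO estimate of the lumped uncertainty is fed back as compensation.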
Suggested Citation
Zhe Hu & Wenjun Yi & Liang Xiao, 2025.
"Deep Reinforcement Learning-Based Impact Angle-Constrained Adaptive Guidance Law,"
Mathematics, MDPI, vol. 13(6), pages 1-26, March.
Handle:
RePEc:gam:jmathe:v:13:y:2025:i:6:p:987-:d:1614200