Author
Listed:
- Longfei Yue
- Rennong Yang
- Ying Zhang
- Lixin Yu
- Zhuangzhuang Wang
- Wen-Long Shang
Abstract
Rapid and precise air operation mission planning is a key technology for autonomous combat with unmanned aerial vehicles (UAVs). In this paper, an end-to-end UAV intelligent mission planning method based on deep reinforcement learning (DRL) is proposed to overcome the shortcomings of traditional intelligent optimization algorithms, such as reliance on simple, static, low-dimensional scenarios and poor scalability. Specifically, suppression of enemy air defense (SEAD) mission planning is described as a sequential decision-making problem and formalized as a Markov decision process (MDP). Then, a SEAD intelligent planning model based on the proximal policy optimization (PPO) algorithm is established and a general intelligent planning architecture is proposed. Furthermore, three policy training tricks, i.e., domain randomization, maximizing policy entropy, and underlying network parameter sharing, are introduced to improve the learning performance and generalizability of PPO. Experimental results show that the proposed model is efficient and stable and can adapt to unknown, continuous, high-dimensional environments. It can be concluded that the DRL-based UAV intelligent mission planning model has powerful planning performance and provides a new direction for research on UAV autonomy.
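As a hedged illustration of the approach summarized above, the minimal sketch below shows a PPO-style actor-critic that combines the three training tricks named in the abstract: a shared underlying network feeding both policy and value heads, an entropy bonus in the loss (policy-entropy maximization), and per-episode domain randomization of scenario parameters. It is not the authors' implementation; the network sizes, hyperparameters, and SEAD scenario ranges are purely illustrative assumptions, written here in PyTorch.

```python
# Minimal sketch (not the authors' code): PPO-style actor-critic with the three
# training tricks from the abstract -- shared underlying network, entropy bonus,
# and domain randomization. All sizes and ranges are illustrative assumptions.
import random
import torch
import torch.nn as nn

class SharedActorCritic(nn.Module):
    """Policy and value heads share the same underlying feature network."""
    def __init__(self, obs_dim, act_dim, hidden=128):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh(),
                                    nn.Linear(hidden, hidden), nn.Tanh())
        self.policy_head = nn.Linear(hidden, act_dim)   # action logits
        self.value_head = nn.Linear(hidden, 1)          # state-value estimate

    def forward(self, obs):
        h = self.shared(obs)
        dist = torch.distributions.Categorical(logits=self.policy_head(h))
        return dist, self.value_head(h).squeeze(-1)

def ppo_loss(model, obs, actions, old_log_probs, advantages, returns,
             clip_eps=0.2, value_coef=0.5, entropy_coef=0.01):
    """Clipped surrogate objective plus value loss, minus an entropy bonus."""
    dist, values = model(obs)
    log_probs = dist.log_prob(actions)
    ratio = torch.exp(log_probs - old_log_probs)
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    policy_loss = -torch.min(ratio * advantages, clipped * advantages).mean()
    value_loss = (returns - values).pow(2).mean()
    entropy = dist.entropy().mean()          # maximizing policy entropy
    return policy_loss + value_coef * value_loss - entropy_coef * entropy

def randomized_scenario():
    """Domain randomization: resample SEAD-like scenario parameters each episode
    (the specific parameters and ranges here are hypothetical)."""
    return {
        "num_sam_sites": random.randint(2, 6),
        "radar_range_km": random.uniform(40.0, 120.0),
        "uav_start_xy": (random.uniform(0.0, 100.0), random.uniform(0.0, 100.0)),
    }
```

A full training loop would roll out episodes in environments built from randomized_scenario(), estimate advantages (e.g., with generalized advantage estimation), and minimize ppo_loss with an optimizer such as Adam.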
Suggested Citation
Longfei Yue & Rennong Yang & Ying Zhang & Lixin Yu & Zhuangzhuang Wang & Wen-Long Shang, 2022.
"Deep Reinforcement Learning for UAV Intelligent Mission Planning,"
Complexity, Hindawi, vol. 2022, pages 1-13, March.
Handle: RePEc:hin:complx:3551508
DOI: 10.1155/2022/3551508
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:hin:complx:3551508. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help adding them by using this form .
If you know of missing items citing this one, you can help us creating those links by adding the relevant references in the same way as above, for each refering item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Mohamed Abdelhakeem (email available below). General contact details of provider: https://www.hindawi.com .
Please note that corrections may take a couple of weeks to filter through
the various RePEc services.