
Maximizing UAV Coverage in Maritime Wireless Networks: A Multiagent Reinforcement Learning Approach

Author

Listed:
  • Qianqian Wu

    (Beijing Key Laboratory of Transportation Data Analysis and Mining, School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044, China)

  • Qiang Liu

    (Beijing Key Laboratory of Transportation Data Analysis and Mining, School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044, China)

  • Zefan Wu

    (Beijing Key Laboratory of Transportation Data Analysis and Mining, School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044, China)

  • Jiye Zhang

    (School of Information Communication and Engineering, Communication University of China, Beijing 100024, China)

Abstract

In the field of ocean data monitoring, collaborative control and path planning of unmanned aerial vehicles (UAVs) are essential for improving data collection efficiency and quality. In this study, we focus on how to utilize multiple UAVs to efficiently cover the target area in ocean data monitoring tasks. First, we propose a multiagent deep reinforcement learning (DRL)-based path-planning method that enables multiple UAVs to perform efficient coverage tasks in a target area. However, the traditional Multi-Agent Twin Delayed Deep Deterministic policy gradient (MATD3) algorithm considers only the current state of the agents, which leads to poor path-planning performance. To address this issue, we introduce an improved MATD3 algorithm that integrates a stacked long short-term memory (S-LSTM) network to incorporate historical interaction information and environmental changes among agents. Finally, experimental results demonstrate that the proposed MATD3-Stacked_LSTM algorithm effectively improves the efficiency and practicality of UAV path planning, achieving a higher coverage rate of the target area and a lower redundant coverage rate among UAVs than two other advanced DRL algorithms.
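The two metrics the abstract reports on, coverage rate and redundant coverage rate, can be illustrated with a minimal sketch. The grid discretization, circular sensing footprints, and all numeric values below are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of coverage metrics for multiple UAVs over a
# discretized target area. Footprint shape and sizes are assumptions.

def covered_cells(center, radius, grid_w, grid_h):
    """Grid cells inside a UAV's assumed circular sensing footprint."""
    cx, cy = center
    return {
        (x, y)
        for x in range(grid_w)
        for y in range(grid_h)
        if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2
    }

def coverage_metrics(uav_positions, radius, grid_w, grid_h):
    """Return (coverage rate, redundant coverage rate) over the grid."""
    footprints = [covered_cells(p, radius, grid_w, grid_h)
                  for p in uav_positions]
    union = set().union(*footprints)
    # A cell covered by more than one UAV counts as redundant coverage.
    counts = {}
    for fp in footprints:
        for cell in fp:
            counts[cell] = counts.get(cell, 0) + 1
    redundant = sum(1 for v in counts.values() if v > 1)
    total = grid_w * grid_h
    return len(union) / total, redundant / total

cov, red = coverage_metrics([(3, 3), (7, 7)], radius=3, grid_w=10, grid_h=10)
print(cov, red)  # high coverage, low redundancy is the objective
```

A multi-UAV path planner of the kind the paper describes would reward configurations that push the first number up while holding the second down.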

Suggested Citation

  • Qianqian Wu & Qiang Liu & Zefan Wu & Jiye Zhang, 2023. "Maximizing UAV Coverage in Maritime Wireless Networks: A Multiagent Reinforcement Learning Approach," Future Internet, MDPI, vol. 15(11), pages 1-19, November.
  • Handle: RePEc:gam:jftint:v:15:y:2023:i:11:p:369-:d:1281158
    Download full text from publisher

    File URL: https://www.mdpi.com/1999-5903/15/11/369/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1999-5903/15/11/369/
    Download Restriction: no


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Yuchen Wang & Zishan Huang & Zhongcheng Wei & Jijun Zhao, 2024. "MADDPG-Based Offloading Strategy for Timing-Dependent Tasks in Edge Computing," Future Internet, MDPI, vol. 16(6), pages 1-20, May.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jftint:v:15:y:2023:i:11:p:369-:d:1281158. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.