Printed from https://ideas.repec.org/a/gam/jftint/v16y2024i7p245-d1432815.html

Optimizing Drone Energy Use for Emergency Communications in Disasters via Deep Reinforcement Learning

Authors

  • Wen Qiu

    (Information Processing Center, Kitami Institute of Technology, Kitami 090-8507, Japan)

  • Xun Shao

    (Department of Electrical and Electronic Information Engineering, Toyohashi University of Technology, Toyohashi 441-8580, Japan)

  • Hiroshi Masui

    (Information Processing Center, Kitami Institute of Technology, Kitami 090-8507, Japan)

  • William Liu

    (Department of Information Technology and Software Engineering, School of Engineering, Computer and Mathematical Sciences, Unitec Institute of Technology, Auckland 1025, New Zealand)

Abstract

For a communication control system in a disaster area where drones (also called unmanned aerial vehicles (UAVs)) serve as aerial base stations (ABSs), reliable communication is a key requirement for providing emergency communication services. However, effective UAV configuration remains difficult due to limits on communication range and energy capacity. In addition, the relatively high cost of drones and mutual communication interference make it impractical to deploy an unlimited number of drones in a given area. To maximize the communication services that a limited number of drones provide to ground user equipment (UE) within a given time frame while minimizing drone energy consumption, we propose a multi-agent proximal policy optimization (MAPPO) algorithm. Considering the dynamic nature of the environment, we analyze diverse observation data structures and design novel objective functions to enhance drone performance. We find that, when drone energy consumption is used as a penalty term in the objective function, the drones, acting as agents, can identify the optimal trajectory that maximizes UE coverage while minimizing energy consumption. The experimental results also show that, setting aside the computing power required for training and the convergence time, the proposed algorithm outperforms other methods in communication coverage and energy saving: its average coverage is 10–45% higher than that of the three baseline methods, and it saves up to 3% more energy.
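The reward design summarized in the abstract, maximizing UE coverage while treating drone energy consumption as a penalty term, can be sketched roughly as follows. All names (`coverage_reward`, `comm_range`, `energy_weight`) and the specific values are illustrative assumptions, not the paper's actual formulation:

```python
import math

def coverage_reward(drone_pos, ue_positions, energy_used,
                    comm_range=100.0, energy_weight=0.1):
    """Per-agent reward sketch: number of UEs within communication
    range of the drone, minus a weighted energy-consumption penalty."""
    covered = sum(
        1 for ue in ue_positions
        if math.dist(drone_pos, ue) <= comm_range
    )
    return covered - energy_weight * energy_used

# Example: a drone at the origin, three UEs, 5 units of energy spent.
# Two UEs fall inside the 100 m range, so the reward is 2 - 0.1*5 = 1.5.
r = coverage_reward((0.0, 0.0),
                    [(10.0, 0.0), (50.0, 50.0), (200.0, 0.0)],
                    energy_used=5.0)
```

Under such shaping, a larger `energy_weight` pushes the learned policies toward shorter, more energy-efficient trajectories at some cost in coverage, which matches the coverage/energy trade-off the abstract reports.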

Suggested Citation

  • Wen Qiu & Xun Shao & Hiroshi Masui & William Liu, 2024. "Optimizing Drone Energy Use for Emergency Communications in Disasters via Deep Reinforcement Learning," Future Internet, MDPI, vol. 16(7), pages 1-18, July.
  • Handle: RePEc:gam:jftint:v:16:y:2024:i:7:p:245-:d:1432815
    Download full text from publisher

    File URL: https://www.mdpi.com/1999-5903/16/7/245/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1999-5903/16/7/245/
    Download Restriction: no

    References listed on IDEAS

    1. Yiwei Na & Yulong Li & Danqiang Chen & Yongming Yao & Tianyu Li & Huiying Liu & Kuankuan Wang, 2023. "Optimal Energy Consumption Path Planning for Unmanned Aerial Vehicles Based on Improved Particle Swarm Optimization," Sustainability, MDPI, vol. 15(16), pages 1-16, August.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Jue Wang & Bin Ji & Qian Fu, 2024. "Soft Actor-Critic and Risk Assessment-Based Reinforcement Learning Method for Ship Path Planning," Sustainability, MDPI, vol. 16(8), pages 1-16, April.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jftint:v:16:y:2024:i:7:p:245-:d:1432815. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.