
Optimizing the lifetime of wireless sensor networks via reinforcement-learning-based routing

Author

Listed:
  • Wenjing Guo
  • Cairong Yan
  • Ting Lu

Abstract

In wireless sensor networks, optimizing the network lifetime is an important issue. Most existing works define the network lifetime as the time at which the first sensor node exhausts its energy. However, that moment is not necessarily critical: when a single sensor node dies, the rest of the network can often continue to work properly. In this article, we first take the demands of applications into overall consideration and define the network lifetime in three aspects. Then, we construct a performance evaluation framework for routing protocols. To optimize the network lifetime in all defined aspects, we propose a reinforcement-learning-based routing protocol, which uses reinforcement learning to search for the optimal routing path for data transmission. In the definition of the reward function, factors such as link distance, residual energy, and hop count to the sink are taken into account in order to reduce the total energy consumption, balance the energy consumption across nodes, and improve packet delivery. Simulation results demonstrate that, compared with energy-aware routing, BEER, Q-Routing, and MRL-SCSO, the proposed protocol optimizes the network lifetime in all three aspects and improves energy efficiency.
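To make the reward design concrete, below is a minimal Python sketch of the kind of per-hop Q-learning step the abstract describes. The Node class, the weight constants, and the exact reward formula are illustrative assumptions for this page, not the authors' published formulation.

    import random

    ALPHA, GAMMA = 0.5, 0.9                   # learning rate, discount factor
    W_ENERGY, W_DIST, W_HOP = 1.0, 0.5, 0.5   # hypothetical reward weights

    class Node:
        def __init__(self, node_id, energy, hops_to_sink, neighbors):
            self.id = node_id
            self.energy = energy              # residual energy (normalized)
            self.hops_to_sink = hops_to_sink  # hop count to the sink
            self.neighbors = neighbors        # {neighbor_id: link distance}
            self.q = {n: 0.0 for n in neighbors}  # one Q-value per next hop

    def reward(sender, receiver):
        # Reward rises with the receiver's residual energy and falls with
        # link distance and remaining hop count, trading off total energy
        # cost, energy balance, and packet delivery (assumed linear form).
        dist = sender.neighbors[receiver.id]
        return (W_ENERGY * receiver.energy
                - W_DIST * dist
                - W_HOP * receiver.hops_to_sink)

    def choose_next_hop(node, epsilon=0.1):
        # Epsilon-greedy: mostly exploit the best-known neighbor,
        # occasionally explore a random one.
        if random.random() < epsilon:
            return random.choice(list(node.q))
        return max(node.q, key=node.q.get)

    def update_q(sender, receiver):
        # Standard one-step Q-learning update applied after forwarding.
        r = reward(sender, receiver)
        best_downstream = max(receiver.q.values(), default=0.0)
        sender.q[receiver.id] += ALPHA * (r + GAMMA * best_downstream
                                          - sender.q[receiver.id])

    # Example: node A forwards a packet to neighbor B and updates its table.
    b = Node("B", energy=0.8, hops_to_sink=2, neighbors={"sink": 30.0})
    a = Node("A", energy=1.0, hops_to_sink=3, neighbors={"B": 25.0})
    update_q(a, b)

An epsilon-greedy policy is one common exploration choice in such protocols; the reward terms mirror the three factors the abstract lists, though the real protocol's weighting and normalization may differ.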

Suggested Citation

  • Wenjing Guo & Cairong Yan & Ting Lu, 2019. "Optimizing the lifetime of wireless sensor networks via reinforcement-learning-based routing," International Journal of Distributed Sensor Networks, vol. 15(2), article 1550147719833541, February.
  • Handle: RePEc:sae:intdis:v:15:y:2019:i:2:p:1550147719833541
    DOI: 10.1177/1550147719833541

    Download full text from publisher

    File URL: https://journals.sagepub.com/doi/10.1177/1550147719833541
    Download Restriction: no

    File URL: https://libkey.io/10.1177/1550147719833541?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. Michael L. Littman, 2015. "Reinforcement learning improves behaviour from evaluative feedback," Nature, Nature, vol. 521(7553), pages 445-451, May.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Amjad Rehman & Tanzila Saba & Khalid Haseeb & Teg Alam & Jaime Lloret, 2022. "Sustainability Model for the Internet of Health Things (IoHT) Using Reinforcement Learning with Mobile Edge Secured Services," Sustainability, MDPI, vol. 14(19), pages 1-14, September.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
    2. Li, Yanbin & Wang, Jiani & Wang, Weiye & Liu, Chang & Li, Yun, 2023. "Dynamic pricing based electric vehicle charging station location strategy using reinforcement learning," Energy, Elsevier, vol. 281(C).
    3. Gohar Gholamibozanjani & Mohammed Farid, 2021. "A Critical Review on the Control Strategies Applied to PCM-Enhanced Buildings," Energies, MDPI, vol. 14(7), pages 1-39, March.
    4. Adrian Millea, 2021. "Deep Reinforcement Learning for Trading—A Critical Survey," Data, MDPI, vol. 6(11), pages 1-25, November.
    5. Vijendra Kumar & Hazi Md. Azamathulla & Kul Vaibhav Sharma & Darshan J. Mehta & Kiran Tota Maharaj, 2023. "The State of the Art in Deep Learning Applications, Challenges, and Future Prospects: A Comprehensive Review of Flood Forecasting and Management," Sustainability, MDPI, vol. 15(13), pages 1-33, July.
    6. Chuhan Wu & Fangzhao Wu & Tao Qi & Wei-Qiang Zhang & Xing Xie & Yongfeng Huang, 2022. "Removing AI’s sentiment manipulation of personalized news delivery," Palgrave Communications, Palgrave Macmillan, vol. 9(1), pages 1-9, December.
    7. Liang, Xuedong & Luo, Peng & Li, Xiaoyan & Wang, Xia & Shu, Lingli, 2023. "Crude oil price prediction using deep reinforcement learning," Resources Policy, Elsevier, vol. 81(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:sae:intdis:v:15:y:2019:i:2:p:1550147719833541. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: SAGE Publications (email available below).

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.