Optimizing the lifetime of wireless sensor networks via reinforcement-learning-based routing
DOI: 10.1177/1550147719833541
References listed on IDEAS
- Michael L. Littman, 2015. "Reinforcement learning improves behaviour from evaluative feedback," Nature, Nature, vol. 521(7553), pages 445-451, May.
Citations
Citations are extracted by the CitEc Project.
Cited by:
- Amjad Rehman & Tanzila Saba & Khalid Haseeb & Teg Alam & Jaime Lloret, 2022. "Sustainability Model for the Internet of Health Things (IoHT) Using Reinforcement Learning with Mobile Edge Secured Services," Sustainability, MDPI, vol. 14(19), pages 1-14, September.
- Arunita Chaukiyal, 2024. "Improving performance of WSNs in IoT applications by transmission power control and adaptive learning rates in reinforcement learning," Telecommunication Systems: Modelling, Analysis, Design and Management, Springer, vol. 87(3), pages 575-591, November.
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one.
- Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
- Li, Yanbin & Wang, Jiani & Wang, Weiye & Liu, Chang & Li, Yun, 2023. "Dynamic pricing based electric vehicle charging station location strategy using reinforcement learning," Energy, Elsevier, vol. 281(C).
- Gohar Gholamibozanjani & Mohammed Farid, 2021. "A Critical Review on the Control Strategies Applied to PCM-Enhanced Buildings," Energies, MDPI, vol. 14(7), pages 1-39, March.
- Adrian Millea, 2021. "Deep Reinforcement Learning for Trading—A Critical Survey," Data, MDPI, vol. 6(11), pages 1-25, November.
- Chuhan Wu & Fangzhao Wu & Tao Qi & Wei-Qiang Zhang & Xing Xie & Yongfeng Huang, 2022. "Removing AI’s sentiment manipulation of personalized news delivery," Palgrave Communications, Palgrave Macmillan, vol. 9(1), pages 1-9, December.
- Liang, Xuedong & Luo, Peng & Li, Xiaoyan & Wang, Xia & Shu, Lingli, 2023. "Crude oil price prediction using deep reinforcement learning," Resources Policy, Elsevier, vol. 81(C).
- Vijendra Kumar & Hazi Md. Azamathulla & Kul Vaibhav Sharma & Darshan J. Mehta & Kiran Tota Maharaj, 2023. "The State of the Art in Deep Learning Applications, Challenges, and Future Prospects: A Comprehensive Review of Flood Forecasting and Management," Sustainability, MDPI, vol. 15(13), pages 1-33, July.
More about this item
Keywords
Wireless sensor networks; network lifetime; reinforcement learning; routing protocol; reward function; energy efficiency
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:sae:intdis:v:15:y:2019:i:2:p:1550147719833541.