Author
Listed:
- Peng Xiong
(Kaiserslautern Institute for Intelligent Manufacturing, Shanghai Dianji University, Shanghai 201308, China)
- Dan He
(School of Information Engineering, Nanchang Hangkong University, Nanchang 330063, China)
- Tiankun Lu
(Industrial Technology Center, Shanghai Dianji University, Shanghai 201308, China)
Abstract
To address the problems of unclear node activation strategies and redundant feasible solutions in solving the target coverage problem in wireless sensor networks, a target coverage algorithm based on deep Q-learning is proposed to learn a node scheduling policy for the network. First, the algorithm abstracts the construction of feasible solutions as a Markov decision process, in which the agent selects the sensor nodes to activate as discrete actions according to the network environment. Second, a reward function evaluates the merit of the agent's chosen actions in terms of the coverage capacity of the activated nodes and their residual energy. The simulation results show that, under the specifically designed states, actions, and reward mechanism, the agent stabilizes its return after 2500 rounds of learning and training, which corresponds to the convergence of the proposed algorithm. The results also show that the proposed algorithm is effective at different network sizes, and its network lifetime outperforms that of three comparison algorithms: the greedy algorithm, the maximum lifetime coverage algorithm, and the self-adaptive learning automata algorithm. Moreover, this advantage becomes more pronounced as the network size, node sensing radius, and initial node energy increase.
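The abstract describes framing cover construction as a Markov decision process: the state reflects which targets are currently covered, each action activates one sensor, and the reward combines newly gained coverage with the chosen node's residual energy. The following is a minimal, self-contained sketch of that framing; for brevity it uses tabular Q-learning rather than the paper's deep Q-network, and the toy instance (`COVERAGE`, `ENERGY`), the reward weighting, and all hyperparameters are illustrative assumptions, not the authors' actual design.

```python
import random
from collections import defaultdict

# Hypothetical toy instance (assumption, not from the paper):
# COVERAGE[i] = targets sensor i can cover; ENERGY[i] = its residual energy.
COVERAGE = [{0, 1}, {1, 2}, {2, 3}, {0, 3}, {1, 3}]
ENERGY = [0.9, 0.5, 0.8, 0.6, 0.7]
TARGETS = {0, 1, 2, 3}

ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate


def reward(covered, action):
    """Reward the number of newly covered targets plus a residual-energy
    bonus; penalize activating a node that adds no coverage (weights are
    assumptions for illustration)."""
    new = len(COVERAGE[action] - covered)
    return new + 0.5 * ENERGY[action] - (0.0 if new else 1.0)


def train(episodes=2000, seed=0):
    """Tabular Q-learning over states = frozensets of covered targets."""
    rng = random.Random(seed)
    Q = defaultdict(float)  # Q[(state, action)]
    for _ in range(episodes):
        covered, active = frozenset(), set()
        while covered != TARGETS:  # episode ends once all targets are covered
            acts = [a for a in range(len(COVERAGE)) if a not in active]
            if rng.random() < EPS:
                a = rng.choice(acts)  # explore
            else:
                a = max(acts, key=lambda x: Q[(covered, x)])  # exploit
            r = reward(covered, a)
            nxt = frozenset(covered | COVERAGE[a])
            active.add(a)
            nxt_acts = [x for x in range(len(COVERAGE)) if x not in active]
            best = 0.0 if nxt == TARGETS else max(
                (Q[(nxt, x)] for x in nxt_acts), default=0.0)
            Q[(covered, a)] += ALPHA * (r + GAMMA * best - Q[(covered, a)])
            covered = nxt
    return Q


def greedy_solution(Q):
    """Roll out the learned policy greedily to extract one cover set."""
    covered, active = frozenset(), set()
    while covered != TARGETS:
        acts = [a for a in range(len(COVERAGE)) if a not in active]
        a = max(acts, key=lambda x: Q[(covered, x)])
        active.add(a)
        covered = frozenset(covered | COVERAGE[a])
    return active


Q = train()
cover = greedy_solution(Q)
```

In the paper's deep variant, the table `Q` would be replaced by a neural network mapping the state encoding to action values, which is what allows the method to scale to the larger network sizes reported in the results.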
Suggested Citation
Peng Xiong & Dan He & Tiankun Lu, 2025.
"A Q-Learning Based Target Coverage Algorithm for Wireless Sensor Networks,"
Mathematics, MDPI, vol. 13(3), pages 1-14, February.
Handle:
RePEc:gam:jmathe:v:13:y:2025:i:3:p:532-:d:1584336
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:13:y:2025:i:3:p:532-:d:1584336. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.