Author
Listed:
- Dejun Chen
(College of Systems Engineering, National University of Defense Technology, Changsha 410073, China)
- Yunxiu Zeng
(College of Systems Engineering, National University of Defense Technology, Changsha 410073, China)
- Yi Zhang
(College of Systems Engineering, National University of Defense Technology, Changsha 410073, China)
- Shuilin Li
(College of Systems Engineering, National University of Defense Technology, Changsha 410073, China)
- Kai Xu
(College of Systems Engineering, National University of Defense Technology, Changsha 410073, China)
- Quanjun Yin
(College of Systems Engineering, National University of Defense Technology, Changsha 410073, China)
Abstract
Deceptive path planning (DPP) aims to find a path that minimizes the probability of an observer identifying the observed agent's real goal before the agent reaches it. It is important for addressing issues such as public safety, strategic path planning, and the privacy of logistics routes. Existing methods often rely on "dissimulation" (hiding the truth) to obscure paths while ignoring time constraints. Building on the theory of probabilistic goal recognition based on cost difference, we propose DPP_Q, a DPP method based on count-based Q-learning for solving DPP problems in discrete path-planning domains under specific time constraints. To extend the method to continuous domains, we further propose a new probabilistic goal-recognition model, the Approximate Goal Recognition Model (AGRM), and verify its feasibility in discrete path-planning domains. Finally, we propose DPP_PPO, a DPP method based on proximal policy optimization for continuous path-planning domains under specific time constraints. To our knowledge, DPP methods of this kind have not yet been explored in the path-planning literature. Experimental results show that, in discrete domains, DPP_Q improves the average deceptiveness of paths over traditional methods by 12.53% on average. In continuous domains, DPP_PPO shows significant advantages over random-walk baselines. Both DPP_Q and DPP_PPO demonstrate good applicability in path-planning domains with uncomplicated obstacles.
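The abstract names two concrete ingredients: goal recognition from cost differences and a count-based exploration bonus in Q-learning. As a rough illustration only (the paper's actual DPP_Q design is not given here), the Python sketch below combines the two on a toy grid. The grid size, goal locations, reward shaping, and the Boltzmann-style posterior are our assumptions; only the beta/sqrt(N(s,a)) count bonus and the cost-difference posterior follow the techniques named in the title and abstract.

import numpy as np

# Hypothetical sketch: count-based Q-learning for deceptive path planning.
# The agent earns (a) a deceptiveness reward that penalizes a high observer
# posterior on the real goal (cost-difference goal recognition, in the style
# of Masters & Sardina), and (b) a count-based exploration bonus
# beta / sqrt(N(s, a)) added during learning.

GRID = 8                                 # toy 8x8 grid (assumption)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]
START, REAL_GOAL, FAKE_GOAL = (0, 0), (7, 7), (7, 0)   # hypothetical goals
T_MAX = 30                               # the "specific time constraint"

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def posterior_real_goal(path_cost, state):
    # P(g | path) from cost differences: cost so far + optimal cost-to-go
    # minus the optimal cost from the start, passed through a softmin.
    diffs = np.array([path_cost + manhattan(state, g) - manhattan(START, g)
                      for g in (REAL_GOAL, FAKE_GOAL)], dtype=float)
    probs = np.exp(-diffs)
    return probs[0] / probs.sum()        # belief assigned to the real goal

def train(episodes=3000, alpha=0.3, gamma=0.95, beta=0.5, eps=0.1):
    Q = np.zeros((GRID, GRID, 4))
    N = np.ones((GRID, GRID, 4))         # visit counts; init at 1 avoids /0
    rng = np.random.default_rng(0)
    for _ in range(episodes):
        s, t = START, 0
        while s != REAL_GOAL and t < T_MAX:
            a = rng.integers(4) if rng.random() < eps else int(np.argmax(Q[s[0], s[1]]))
            dr, dc = ACTIONS[a]
            s2 = (min(max(s[0] + dr, 0), GRID - 1),
                  min(max(s[1] + dc, 0), GRID - 1))
            t += 1
            r = -posterior_real_goal(t, s2)          # stay non-obvious...
            if s2 == REAL_GOAL:
                r += 10.0                            # ...but reach the goal in time
            r += beta / np.sqrt(N[s[0], s[1], a])    # count-based bonus
            N[s[0], s[1], a] += 1
            target = 0.0 if s2 == REAL_GOAL else Q[s2[0], s2[1]].max()
            Q[s[0], s[1], a] += alpha * (r + gamma * target - Q[s[0], s[1], a])
            s = s2
    return Q

if __name__ == "__main__":
    Q = train()
    print("Greedy action values at start:", Q[START[0], START[1]])

The design intent mirrors the abstract: the learned policy trades off keeping the observer's posterior on the real goal low against still reaching that goal within the time budget.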
Suggested Citation
Dejun Chen & Yunxiu Zeng & Yi Zhang & Shuilin Li & Kai Xu & Quanjun Yin, 2024.
"Deceptive Path Planning via Count-Based Reinforcement Learning under Specific Time Constraint,"
Mathematics, MDPI, vol. 12(13), pages 1-21, June.
Handle:
RePEc:gam:jmathe:v:12:y:2024:i:13:p:1979-:d:1422955
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:12:y:2024:i:13:p:1979-:d:1422955. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic, or download information, contact the MDPI Indexing Manager. General contact details of provider: https://www.mdpi.com .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.