Author
Listed:
- Wei Nai
- Zan Yang
- Daxuan Lin
- Dan Li
- Yidan Xing
- Niansheng Tang
Abstract
The transportation systems of countries with huge traffic flows are under great pressure in transportation planning and management. Vehicle path planning is one of the effective ways to alleviate such pressure. Deep reinforcement learning (DRL), as a state-of-the-art solution method for vehicle path planning, can better balance algorithmic capability and the complexity needed to reflect real situations. However, DRL suffers from high search cost and premature convergence to local optima, as vehicle path planning problems usually arise in complex environments and their action sets can be diverse. In this paper, a mixed policy gradient actor-critic (AC) model with a random escape term and a filter operation is proposed, in which the policy weight is both data driven and model driven. The empirical data-driven method is used to improve the otherwise poor asymptotic performance, while the model-driven method ensures the convergence speed of the whole model. At the same time, to keep the model from converging to a local optimum, a random escape term is added to the policy weight update; it overcomes the difficulty of optimizing a non-convex loss function and helps the policy explore in more directions. In addition, filter optimization is innovatively introduced in this paper: the step size of each iteration of the model is selected through the filter optimization algorithm to achieve a better iterative effect. Numerical experiment results show that the proposed model can improve the accuracy of the solution while also speeding up convergence and improving data utilization.
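Since this listing carries only the abstract, the following is a minimal Python sketch of the policy-weight update the abstract describes, not the authors' implementation. The blend coefficient between the data-driven and model-driven gradients, the `escape_scale` parameter, and the argmin-based step screening (standing in for a full filter acceptance test) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixed_ac_update(theta, grad_data, grad_model, step_sizes,
                    loss_fn, blend=0.5, escape_scale=1e-2):
    """One illustrative policy-weight update (sketch, not the paper's code).

    theta        : current policy parameters
    grad_data    : empirical (data-driven) policy gradient estimate
    grad_model   : model-driven policy gradient estimate
    step_sizes   : candidate step sizes screened filter-style
    loss_fn      : maps parameters to a scalar loss, used for screening
    blend        : data- vs. model-driven weighting (illustrative)
    escape_scale : magnitude of the random escape perturbation (illustrative)
    """
    # Mix the two gradient estimates: the data-driven term targets
    # asymptotic performance, the model-driven term convergence speed.
    grad = blend * grad_data + (1.0 - blend) * grad_model

    # Random escape term: a small random perturbation that lets the
    # update explore more directions and leave poor local optima of
    # the non-convex loss.
    direction = grad + escape_scale * rng.standard_normal(theta.shape)

    # Filter-style step-size selection: among the candidate steps,
    # keep the one with the lowest loss (a simple stand-in for the
    # paper's filter optimization algorithm).
    candidates = [theta - s * direction for s in step_sizes]
    losses = [loss_fn(c) for c in candidates]
    return candidates[int(np.argmin(losses))]

# Toy usage on a non-convex loss (illustrative only; the same gradient
# stands in for both the data- and model-driven estimates here).
loss = lambda th: float(np.sum(np.sin(3 * th) + 0.1 * th ** 2))
theta = np.array([2.0, -1.5])
for _ in range(50):
    g = 3 * np.cos(3 * theta) + 0.2 * theta
    theta = mixed_ac_update(theta, g, g, [0.3, 0.1, 0.03], loss)
```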
Suggested Citation
Wei Nai & Zan Yang & Daxuan Lin & Dan Li & Yidan Xing & Niansheng Tang, 2022.
"A Vehicle Path Planning Algorithm Based on Mixed Policy Gradient Actor-Critic Model with Random Escape Term and Filter Optimization,"
Journal of Mathematics, Hindawi, vol. 2022, pages 1-17, August.
Handle:
RePEc:hin:jjmath:3679145
DOI: 10.1155/2022/3679145