Improving the Performance of Autonomous Driving through Deep Reinforcement Learning
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one.
- Voelkel, Michael A. & Sachs, Anna-Lena & Thonemann, Ulrich W., 2020. "An aggregation-based approximate dynamic programming approach for the periodic review model with random yield," European Journal of Operational Research, Elsevier, vol. 281(2), pages 286-298.
- Fang, Jianhao & Hu, Weifei & Liu, Zhenyu & Chen, Weiyi & Tan, Jianrong & Jiang, Zhiyu & Verma, Amrit Shankar, 2022. "Wind turbine rotor speed design optimization considering rain erosion based on deep reinforcement learning," Renewable and Sustainable Energy Reviews, Elsevier, vol. 168(C).
- Dieter Hendricks & Diane Wilcox, 2014. "A reinforcement learning extension to the Almgren-Chriss model for optimal trade execution," Papers 1403.2229, arXiv.org.
- Wang, Xianjia & Yang, Zhipeng & Liu, Yanli & Chen, Guici, 2023. "A reinforcement learning-based strategy updating model for the cooperative evolution," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 618(C).
- Bunn, Derek W. & Oliveira, Fernando S., 2016. "Dynamic capacity planning using strategic slack valuation," European Journal of Operational Research, Elsevier, vol. 253(1), pages 40-50.
- Andreas Rauh & Marit Lahme & Oussama Benzinane, 2022. "A Comparison of the Use of Pontryagin’s Maximum Principle and Reinforcement Learning Techniques for the Optimal Charging of Lithium-Ion Batteries," Clean Technol., MDPI, vol. 4(4), pages 1-21, December.
- Puwei Lu & Wenkai Huang & Junlong Xiao & Fobao Zhou & Wei Hu, 2021. "Adaptive Proportional Integral Robust Control of an Uncertain Robotic Manipulator Based on Deep Deterministic Policy Gradient," Mathematics, MDPI, vol. 9(17), pages 1-16, August.
- Jia, Liangyue & Hao, Jia & Hall, John & Nejadkhaki, Hamid Khakpour & Wang, Guoxin & Yan, Yan & Sun, Mengyuan, 2021. "A reinforcement learning based blade twist angle distribution searching method for optimizing wind turbine energy power," Energy, Elsevier, vol. 215(PA).
- Zhang, Xiaoshun & Chen, Yixuan & Yu, Tao & Yang, Bo & Qu, Kaiping & Mao, Senmao, 2017. "Equilibrium-inspired multiagent optimizer with extreme transfer learning for decentralized optimal carbon-energy combined-flow of large-scale power systems," Applied Energy, Elsevier, vol. 189(C), pages 157-176.
- Justin Dumouchelle & Emma Frejinger & Andrea Lodi, 2024. "Reinforcement learning for freight booking control problems," Journal of Revenue and Pricing Management, Palgrave Macmillan, vol. 23(4), pages 318-345, August.
- M. Saqlain & S. Ali & J. Y. Lee, 2023. "A Monte-Carlo tree search algorithm for the flexible job-shop scheduling in manufacturing systems," Flexible Services and Manufacturing Journal, Springer, vol. 35(2), pages 548-571, June.
- Li, Munan & Wang, Wenshu & Zhou, Keyu, 2021. "Exploring the technology emergence related to artificial intelligence: A perspective of coupling analyses," Technological Forecasting and Social Change, Elsevier, vol. 172(C).
- Hoai An Le Thi & Vinh Thanh Ho & Tao Pham Dinh, 2019. "A unified DC programming framework and efficient DCA based approaches for large scale batch reinforcement learning," Journal of Global Optimization, Springer, vol. 73(2), pages 279-310, February.
- Lee, Junhyeok & Shin, Youngchul & Moon, Ilkyeong, 2024. "A hybrid deep reinforcement learning approach for a proactive transshipment of fresh food in the online–offline channel system," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 187(C).
More about this item
Keywords
autonomous driving vehicles; reinforcement learning; DQN; DDPG; PPO
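The keywords name three deep reinforcement learning algorithms (DQN, DDPG, PPO). As a point of reference, the sketch below illustrates the temporal-difference target at the heart of DQN; it is a minimal, dependency-free illustration with made-up numbers, not code from the article. A real DQN replaces the scalar estimate with a neural network evaluated over states.

```python
# Minimal sketch of the bootstrapped update used by Q-learning/DQN.
# All numbers below are hypothetical; a real DQN computes Q-values
# with a neural network and samples transitions from a replay buffer.

GAMMA = 0.99   # discount factor
ALPHA = 0.1    # learning rate

def td_target(reward, next_state_q, done, gamma=GAMMA):
    """Bootstrapped target: r + gamma * max_a' Q(s', a')."""
    if done:
        return reward
    return reward + gamma * max(next_state_q)

def q_update(q_value, target, alpha=ALPHA):
    """Move the current estimate a step toward the target."""
    return q_value + alpha * (target - q_value)

# One illustrative update on made-up values.
q_sa = 0.5                                   # current Q(s, a)
target = td_target(reward=1.0,
                   next_state_q=[0.2, 0.8],  # Q(s', a') for each action
                   done=False)
q_sa = q_update(q_sa, target)
```

DDPG and PPO build on the same target idea but learn a separate policy network, which suits the continuous steering and throttle commands of autonomous driving.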
Statistics
Access and download statistics
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jsusta:v:15:y:2023:i:18:p:13799-:d:1240934. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. Registering allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.
If CitEc recognized a bibliographic reference but did not link it to an item in RePEc, you can help by completing this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic, or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.