
A deep reinforcement learning approach for rail renewal and maintenance planning

Author

Listed:
  • Mohammadi, Reza
  • He, Qing

Abstract

Developing optimal rail renewal and maintenance plans that minimize long-term costs and risks of failure is of paramount importance for the railroad industry. However, intrinsic uncertainty, the presence of constraints, and the curse of dimensionality make this a challenging engineering problem. Despite the potential of Deep Reinforcement Learning (DRL), very little research has applied DRL methods to renewal and maintenance planning. Inspired by recent advances in DRL, a DRL-based approach is developed that optimizes renewal and maintenance planning over a planning horizon by considering cost-effectiveness and risk reduction. We consider both predictive and condition-based maintenance tasks and incorporate time, resource, and related engineering constraints into the model to capture realistic features of the problem. Available historical inspection and maintenance data are used to simulate the rail environment and feed the DRL method. A Double Deep Q-Network (DDQN) is applied to cope with the uncertainty of the environment. In addition, prioritized replay memory is applied, which improves learning by giving higher weight to the agent's most important experiences. The proposed DDQN approach is applied to a Class I railroad network to demonstrate the applicability and efficiency of the approach. Our analyses demonstrate that the proposed approach develops an optimal policy that not only reduces budget consumption but also improves the reliability and safety of the network.
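
The approach described in the abstract combines a Double Deep Q-Network with prioritized experience replay. Below is a minimal sketch of that combination, assuming a toy rail-segment environment with a discretized condition state and three actions (do nothing / maintain / renew); all class, function, and parameter names are illustrative and are not taken from the paper's implementation.

    # Minimal DDQN + prioritized replay sketch (illustrative only, not the paper's code).
    import random
    import numpy as np
    import torch
    import torch.nn as nn

    STATE_DIM, N_ACTIONS, GAMMA = 8, 3, 0.99   # assumed toy dimensions

    class QNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(STATE_DIM, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, N_ACTIONS))

        def forward(self, x):
            return self.net(x)

    online, target = QNet(), QNet()
    target.load_state_dict(online.state_dict())
    opt = torch.optim.Adam(online.parameters(), lr=1e-3)

    # Prioritized replay: sample transitions with probability proportional to
    # |TD error|^ALPHA and correct the induced bias with importance-sampling weights.
    buffer, priorities = [], []
    ALPHA, BETA = 0.6, 0.4

    def sample(batch_size):
        p = np.array(priorities) ** ALPHA
        p /= p.sum()
        idx = np.random.choice(len(buffer), batch_size, p=p)
        w = (len(buffer) * p[idx]) ** (-BETA)
        return idx, [buffer[i] for i in idx], torch.tensor(w / w.max(), dtype=torch.float32)

    def ddqn_update(batch_size=32):
        idx, batch, w = sample(batch_size)
        s    = torch.tensor(np.stack([b[0] for b in batch]), dtype=torch.float32)
        a    = torch.tensor([b[1] for b in batch], dtype=torch.int64)
        r    = torch.tensor([b[2] for b in batch], dtype=torch.float32)
        s2   = torch.tensor(np.stack([b[3] for b in batch]), dtype=torch.float32)
        done = torch.tensor([b[4] for b in batch], dtype=torch.float32)

        q = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            # Double DQN: the online network selects the next action,
            # the target network evaluates it.
            a2 = online(s2).argmax(dim=1, keepdim=True)
            y = r + GAMMA * target(s2).gather(1, a2).squeeze(1) * (1.0 - done)

        td = y - q
        loss = (w * td.pow(2)).mean()            # importance-weighted squared TD error
        opt.zero_grad(); loss.backward(); opt.step()
        for i, e in zip(idx, td.abs().detach().numpy()):
            priorities[i] = float(e) + 1e-6      # refresh priorities with new TD errors

    # Fill the buffer with random transitions so the sketch runs end to end;
    # a real agent would generate these by interacting with the simulated rail environment.
    for _ in range(200):
        buffer.append((np.random.rand(STATE_DIM).astype(np.float32),
                       random.randrange(N_ACTIONS),
                       -float(np.random.rand()),
                       np.random.rand(STATE_DIM).astype(np.float32),
                       0.0))
        priorities.append(1.0)

    ddqn_update()

In the paper's setting, the state would additionally encode inspection and maintenance history, the reward would trade off maintenance and renewal costs against failure risk, and time and resource constraints would restrict the feasible action set; the sketch above omits those details.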

Suggested Citation

  • Mohammadi, Reza & He, Qing, 2022. "A deep reinforcement learning approach for rail renewal and maintenance planning," Reliability Engineering and System Safety, Elsevier, vol. 225(C).
  • Handle: RePEc:eee:reensy:v:225:y:2022:i:c:s0951832022002575
    DOI: 10.1016/j.ress.2022.108615

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0951832022002575
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.ress.2022.108615?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a version you can access through your library subscription

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Kim, A. & Yang, Y. & Lessmann, S. & Ma, T. & Sung, M.-C. & Johnson, J.E.V., 2020. "Can deep learning predict risky retail investors? A case study in financial risk behavior forecasting," European Journal of Operational Research, Elsevier, vol. 283(1), pages 217-234.
    2. Sedghi, Mahdieh & Kauppila, Osmo & Bergquist, Bjarne & Vanhatalo, Erik & Kulahci, Murat, 2021. "A taxonomy of railway track maintenance planning and scheduling: A review and research trends," Reliability Engineering and System Safety, Elsevier, vol. 215(C).
    3. Liu, Yu & Chen, Yiming & Jiang, Tao, 2020. "Dynamic selective maintenance optimization for multi-state systems over a finite horizon: A deep reinforcement learning approach," European Journal of Operational Research, Elsevier, vol. 283(1), pages 166-181.
    4. Zhang, Nailong & Si, Wujun, 2020. "Deep reinforcement learning for condition-based maintenance planning of multi-component systems under dependent competing risks," Reliability Engineering and System Safety, Elsevier, vol. 203(C).
    5. Rocchetta, R. & Bellani, L. & Compare, M. & Zio, E. & Patelli, E., 2019. "A reinforcement learning framework for optimal operation and maintenance of power grids," Applied Energy, Elsevier, vol. 241(C), pages 291-301.
    6. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    7. Andriotis, C.P. & Papakonstantinou, K.G., 2019. "Managing engineering systems with large state and action spaces through deep reinforcement learning," Reliability Engineering and System Safety, Elsevier, vol. 191(C).
    8. Xiang, Zhengliang & Bao, Yuequan & Tang, Zhiyi & Li, Hui, 2020. "Deep reinforcement learning-based sampling method for structural reliability assessment," Reliability Engineering and System Safety, Elsevier, vol. 199(C).
    9. Xiao Wang & Hongwei Wang & Chao Qi, 2016. "Multi-agent reinforcement learning based maintenance policy for a resource constrained flow line system," Journal of Intelligent Manufacturing, Springer, vol. 27(2), pages 325-333, April.
    10. Yang, Hongbing & Li, Wenchao & Wang, Bin, 2021. "Joint optimization of preventive maintenance and production scheduling for multi-state production systems based on reinforcement learning," Reliability Engineering and System Safety, Elsevier, vol. 214(C).
    11. Zhou, Yifan & Li, Bangcheng & Lin, Tian Ran, 2022. "Maintenance optimisation of multicomponent systems using hierarchical coordinated reinforcement learning," Reliability Engineering and System Safety, Elsevier, vol. 217(C).
    12. Xiang, Yisha, 2013. "Joint optimization of X¯ control chart and preventive maintenance policies: A discrete-time Markov chain approach," European Journal of Operational Research, Elsevier, vol. 229(2), pages 382-390.
    13. Andriotis, C.P. & Papakonstantinou, K.G., 2021. "Deep reinforcement learning driven inspection and maintenance planning under incomplete information and constraints," Reliability Engineering and System Safety, Elsevier, vol. 212(C).
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Lee, Dongkyu & Song, Junho, 2023. "Risk-informed operation and maintenance of complex lifeline systems using parallelized multi-agent deep Q-network," Reliability Engineering and System Safety, Elsevier, vol. 239(C).
    2. Morato, P.G. & Andriotis, C.P. & Papakonstantinou, K.G. & Rigo, P., 2023. "Inference and dynamic decision-making for deteriorating systems with probabilistic dependencies through Bayesian networks and deep reinforcement learning," Reliability Engineering and System Safety, Elsevier, vol. 235(C).
    3. Li, Haoqian & Wang, Yong & Zeng, Jing & Li, Fansong & Yang, Zhenhuan & Mei, Guiming & Ye, Yunguang, 2024. "Virtual point tracking method for online detection of relative wheel-rail displacement of railway vehicles," Reliability Engineering and System Safety, Elsevier, vol. 246(C).
    4. Yan, Dongyang & Li, Keping & Zhu, Qiaozhen & Liu, Yanyan, 2023. "A railway accident prevention method based on reinforcement learning – Active preventive strategy by multi-modal data," Reliability Engineering and System Safety, Elsevier, vol. 234(C).
    5. Tseremoglou, Iordanis & Santos, Bruno F., 2024. "Condition-Based Maintenance scheduling of an aircraft fleet under partial observability: A Deep Reinforcement Learning approach," Reliability Engineering and System Safety, Elsevier, vol. 241(C).
    6. Lee, Juseong & Mitici, Mihaela, 2023. "Deep reinforcement learning for predictive aircraft maintenance using probabilistic Remaining-Useful-Life prognostics," Reliability Engineering and System Safety, Elsevier, vol. 230(C).
    7. Yang, Sen & Zhang, Yi & Lu, Xinzheng & Guo, Wei & Miao, Huiquan, 2024. "Multi-agent deep reinforcement learning based decision support model for resilient community post-hazard recovery," Reliability Engineering and System Safety, Elsevier, vol. 242(C).
    8. Liu, Xuan & Meng, Huixing & An, Xu & Xing, Jinduo, 2024. "Integration of functional resonance analysis method and reinforcement learning for updating and optimizing emergency procedures in variable environments," Reliability Engineering and System Safety, Elsevier, vol. 241(C).
    9. Lee, Jun S. & Yeo, In-Ho & Bae, Younghoon, 2024. "A stochastic track maintenance scheduling model based on deep reinforcement learning approaches," Reliability Engineering and System Safety, Elsevier, vol. 241(C).
    10. Saleh, Ali & Chiachío, Manuel & Salas, Juan Fernández & Kolios, Athanasios, 2023. "Self-adaptive optimized maintenance of offshore wind turbines by intelligent Petri nets," Reliability Engineering and System Safety, Elsevier, vol. 231(C).
    11. Najafi, Seyedvahid & Lee, Chi-Guhn, 2023. "A deep reinforcement learning approach for repair-based maintenance of multi-unit systems using proportional hazards model," Reliability Engineering and System Safety, Elsevier, vol. 234(C).
    12. Rokhforoz, Pegah & Montazeri, Mina & Fink, Olga, 2023. "Safe multi-agent deep reinforcement learning for joint bidding and maintenance scheduling of generation units," Reliability Engineering and System Safety, Elsevier, vol. 232(C).
    13. Liu, Hengchang & Li, Bo & Yao, Fengming & Hu, Gexi & Xie, Lei, 2024. "Maintenance optimization of multi-unit balanced systems using deep reinforcement learning," Reliability Engineering and System Safety, Elsevier, vol. 244(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Najafi, Seyedvahid & Lee, Chi-Guhn, 2023. "A deep reinforcement learning approach for repair-based maintenance of multi-unit systems using proportional hazards model," Reliability Engineering and System Safety, Elsevier, vol. 234(C).
    2. Zhou, Yifan & Li, Bangcheng & Lin, Tian Ran, 2022. "Maintenance optimisation of multicomponent systems using hierarchical coordinated reinforcement learning," Reliability Engineering and System Safety, Elsevier, vol. 217(C).
    3. Lee, Dongkyu & Song, Junho, 2023. "Risk-informed operation and maintenance of complex lifeline systems using parallelized multi-agent deep Q-network," Reliability Engineering and System Safety, Elsevier, vol. 239(C).
    4. Azar, Kamyar & Hajiakhondi-Meybodi, Zohreh & Naderkhani, Farnoosh, 2022. "Semi-supervised clustering-based method for fault diagnosis and prognosis: A case study," Reliability Engineering and System Safety, Elsevier, vol. 222(C).
    5. Morato, P.G. & Andriotis, C.P. & Papakonstantinou, K.G. & Rigo, P., 2023. "Inference and dynamic decision-making for deteriorating systems with probabilistic dependencies through Bayesian networks and deep reinforcement learning," Reliability Engineering and System Safety, Elsevier, vol. 235(C).
    6. Lee, Juseong & Mitici, Mihaela, 2023. "Deep reinforcement learning for predictive aircraft maintenance using probabilistic Remaining-Useful-Life prognostics," Reliability Engineering and System Safety, Elsevier, vol. 230(C).
    7. Cheng, Jianda & Cheng, Minghui & Liu, Yan & Wu, Jun & Li, Wei & Frangopol, Dan M., 2024. "Knowledge transfer for adaptive maintenance policy optimization in engineering fleets based on meta-reinforcement learning," Reliability Engineering and System Safety, Elsevier, vol. 247(C).
    8. Lee, Jun S. & Yeo, In-Ho & Bae, Younghoon, 2024. "A stochastic track maintenance scheduling model based on deep reinforcement learning approaches," Reliability Engineering and System Safety, Elsevier, vol. 241(C).
    9. Zheng, Meimei & Su, Zhiyun & Wang, Dong & Pan, Ershun, 2024. "Joint maintenance and spare part ordering from multiple suppliers for multicomponent systems using a deep reinforcement learning algorithm," Reliability Engineering and System Safety, Elsevier, vol. 241(C).
    10. Xu, Zhaoyi & Saleh, Joseph Homer, 2021. "Machine learning for reliability engineering and safety applications: Review of current status and future opportunities," Reliability Engineering and System Safety, Elsevier, vol. 211(C).
    11. Mikhail, Mina & Ouali, Mohamed-Salah & Yacout, Soumaya, 2024. "A data-driven methodology with a nonparametric reliability method for optimal condition-based maintenance strategies," Reliability Engineering and System Safety, Elsevier, vol. 241(C).
    12. Guan, Xiaoshu & Xiang, Zhengliang & Bao, Yuequan & Li, Hui, 2022. "Structural dominant failure modes searching method based on deep reinforcement learning," Reliability Engineering and System Safety, Elsevier, vol. 219(C).
    13. Ye, Zhenggeng & Cai, Zhiqiang & Yang, Hui & Si, Shubin & Zhou, Fuli, 2023. "Joint optimization of maintenance and quality inspection for manufacturing networks based on deep reinforcement learning," Reliability Engineering and System Safety, Elsevier, vol. 236(C).
    14. Nguyen, Van-Thai & Do, Phuc & Vosin, Alexandre & Iung, Benoit, 2022. "Artificial-intelligence-based maintenance decision-making and optimization for multi-state component systems," Reliability Engineering and System Safety, Elsevier, vol. 228(C).
    15. Guan, Xiaoshu & Sun, Huabin & Hou, Rongrong & Xu, Yang & Bao, Yuequan & Li, Hui, 2023. "A deep reinforcement learning method for structural dominant failure modes searching based on self-play strategy," Reliability Engineering and System Safety, Elsevier, vol. 233(C).
    16. Anwar, Ghazanfar Ali & Zhang, Xiaoge, 2024. "Deep reinforcement learning for intelligent risk optimization of buildings under hazard," Reliability Engineering and System Safety, Elsevier, vol. 247(C).
    17. Tseremoglou, Iordanis & Santos, Bruno F., 2024. "Condition-Based Maintenance scheduling of an aircraft fleet under partial observability: A Deep Reinforcement Learning approach," Reliability Engineering and System Safety, Elsevier, vol. 241(C).
    18. Ferreira Neto, Waldomiro Alves & Virgínio Cavalcante, Cristiano Alexandre & Do, Phuc, 2024. "Deep reinforcement learning for maintenance optimization of a scrap-based steel production line," Reliability Engineering and System Safety, Elsevier, vol. 249(C).
    19. Hamida, Zachary & Goulet, James-A., 2023. "Hierarchical reinforcement learning for transportation infrastructure maintenance planning," Reliability Engineering and System Safety, Elsevier, vol. 235(C).
    20. Yang, Hongbing & Li, Wenchao & Wang, Bin, 2021. "Joint optimization of preventive maintenance and production scheduling for multi-state production systems based on reinforcement learning," Reliability Engineering and System Safety, Elsevier, vol. 214(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:reensy:v:225:y:2022:i:c:s0951832022002575. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: https://www.journals.elsevier.com/reliability-engineering-and-system-safety .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.