
A resilient network recovery framework against cascading failures with deep graph learning

Authors

  • Jian Zhou
  • Weijian Zheng
  • Dali Wang
  • David W. Coit

Abstract

Because of the increasing importance of and dependencies among infrastructure networks, and the potential for massive cascading failures in real-world network systems, maintenance optimization that effectively reduces the system performance loss caused by diverse disruptions is of significant interest to researchers and practitioners. In this work, a new recovery framework was developed to rapidly identify important system components for maintenance and thereby improve network resilience against cascading failures. The framework offers a distinct advantage: it determines an optimal maintenance priority by combining real-time network structural importance with other maintenance prioritization based on customer preference. The approach adopts structural graph embedding and deep reinforcement learning to extract real-time network topology information (such as the minimum vertex cover) and to update the maintenance priority during the recovery process. In case studies on synthetic networks and a US airport network, the proposed recovery framework with real-time network topology awareness outperforms other maintenance prioritization strategies in terms of resilience enhancement. This work improves the understanding of how the changing network structure influences maintenance effects. It also provides insights into the practical usefulness of advanced deep learning for guiding optimal maintenance prioritization to effectively reduce the intensity and extent of cascading failures.
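To make the structural signal concrete, the minimal sketch below (Python, using networkx) ranks failed components for repair by membership in a greedy approximation of the minimum vertex cover, combined with a customer-preference weight. This is an illustrative assumption, not the authors' implementation: the paper learns the importance signal with structural graph embedding and deep reinforcement learning rather than a greedy heuristic, and the function names, the degree tie-breaker, and the preference weights here are all hypothetical.

import networkx as nx

def greedy_vertex_cover(graph):
    """Matching-based 2-approximation of a minimum vertex cover."""
    cover = set()
    g = graph.copy()
    while g.number_of_edges() > 0:
        # Take both endpoints of any remaining edge, add them to the cover,
        # and remove them from the working graph.
        u, v = next(iter(g.edges()))
        cover.update((u, v))
        g.remove_nodes_from((u, v))
    return cover

def rank_failed_nodes(topology, failed, preference=None):
    """Order failed components for repair: structural importance first,
    customer preference as a secondary weight."""
    cover = greedy_vertex_cover(topology)
    preference = preference or {}
    def score(node):
        # Structural term: cover membership plus a small degree tie-breaker.
        structural = (node in cover) + 0.01 * topology.degree(node)
        return structural + preference.get(node, 0.0)
    return sorted(failed, key=score, reverse=True)

# Example on a synthetic scale-free network with four hypothetical failures.
g = nx.barabasi_albert_graph(50, 2, seed=1)
print(rank_failed_nodes(g, failed=[3, 7, 12, 30], preference={12: 0.5}))

In an actual recovery loop, the ranking would be recomputed on the surviving topology after each repair step, which is where the framework's real-time topology awareness enters.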

Suggested Citation

  • Jian Zhou & Weijian Zheng & Dali Wang & David W. Coit, 2024. "A resilient network recovery framework against cascading failures with deep graph learning," Journal of Risk and Reliability, vol. 238(1), pages 193-203, February.
  • Handle: RePEc:sae:risrel:v:238:y:2024:i:1:p:193-203
    DOI: 10.1177/1748006X221128869

    Download full text from publisher

    File URL: https://journals.sagepub.com/doi/10.1177/1748006X221128869
    Download Restriction: no

    File URL: https://libkey.io/10.1177/1748006X221128869?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy you can access through your library subscription.

    References listed on IDEAS

    1. Lee, D.-S. & Goh, K.-I. & Kahng, B. & Kim, D., 2004. "Sandpile avalanche dynamics on scale-free networks," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 338(1), pages 84-91.
    2. Li, Ruiying & Gao, Ying, 2022. "On the component resilience importance measures for infrastructure systems," International Journal of Critical Infrastructure Protection, Elsevier, vol. 36(C).
    3. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charles Beattie & et al., 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    4. Benjamin Schäfer & Dirk Witthaut & Marc Timme & Vito Latora, 2018. "Author Correction: Dynamically induced cascading failures in power grids," Nature Communications, Nature, vol. 9(1), pages 1-1, December.
    5. Yasser Almoghathawi & Andrés D. González & Kash Barker, 2021. "Exploring Recovery Strategies for Optimal Interdependent Infrastructure Network Resilience," Networks and Spatial Economics, Springer, vol. 21(1), pages 229-260, March.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Sun, Qin & Li, Hongxu & Zhong, Yuanfu & Ren, Kezhou & Zhang, Yingchao, 2024. "Deep reinforcement learning-based resilience enhancement strategy of unmanned weapon system-of-systems under inevitable interferences," Reliability Engineering and System Safety, Elsevier, vol. 242(C).
    2. Tulika Saha & Sriparna Saha & Pushpak Bhattacharyya, 2020. "Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-28, July.
    3. Mahmoud Mahfouz & Angelos Filos & Cyrine Chtourou & Joshua Lockhart & Samuel Assefa & Manuela Veloso & Danilo Mandic & Tucker Balch, 2019. "On the Importance of Opponent Modeling in Auction Markets," Papers 1911.12816, arXiv.org.
    4. Woo Jae Byun & Bumkyu Choi & Seongmin Kim & Joohyun Jo, 2023. "Practical Application of Deep Reinforcement Learning to Optimal Trade Execution," FinTech, MDPI, vol. 2(3), pages 1-16, June.
    5. Lu, Yu & Xiang, Yue & Huang, Yuan & Yu, Bin & Weng, Liguo & Liu, Junyong, 2023. "Deep reinforcement learning based optimal scheduling of active distribution system considering distributed generation, energy storage and flexible load," Energy, Elsevier, vol. 271(C).
    6. Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
    7. Michelle M. LaMar, 2018. "Markov Decision Process Measurement Model," Psychometrika, Springer;The Psychometric Society, vol. 83(1), pages 67-88, March.
    8. Yang, Ting & Zhao, Liyuan & Li, Wei & Zomaya, Albert Y., 2021. "Dynamic energy dispatch strategy for integrated energy system based on improved deep reinforcement learning," Energy, Elsevier, vol. 235(C).
    9. Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
    10. Neha Soni & Enakshi Khular Sharma & Narotam Singh & Amita Kapoor, 2019. "Impact of Artificial Intelligence on Businesses: from Research, Innovation, Market Deployment to Future Shifts in Business Models," Papers 1905.02092, arXiv.org.
    11. Lahtinen, Jani & Kertész, János & Kaski, Kimmo, 2005. "Sandpiles on Watts–Strogatz type small-worlds," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 349(3), pages 535-547.
    12. Ouyang, Bo & Teng, Zhaosheng & Tang, Qiu, 2016. "Dynamics in local influence cascading models," Chaos, Solitons & Fractals, Elsevier, vol. 93(C), pages 182-186.
    13. Ande Chang & Yuting Ji & Chunguang Wang & Yiming Bie, 2024. "CVDMARL: A Communication-Enhanced Value Decomposition Multi-Agent Reinforcement Learning Traffic Signal Control Method," Sustainability, MDPI, vol. 16(5), pages 1-17, March.
    14. Sun, Hongchang & Niu, Yanlei & Li, Chengdong & Zhou, Changgeng & Zhai, Wenwen & Chen, Zhe & Wu, Hao & Niu, Lanqiang, 2022. "Energy consumption optimization of building air conditioning system via combining the parallel temporal convolutional neural network and adaptive opposition-learning chimp algorithm," Energy, Elsevier, vol. 259(C).
    15. Zhang, Yang & Yang, Qingyu & Li, Donghe & An, Dou, 2022. "A reinforcement and imitation learning method for pricing strategy of electricity retailer with customers’ flexibility," Applied Energy, Elsevier, vol. 323(C).
    16. He, Jing & Liu, Xinglu & Duan, Qiyao & Chan, Wai Kin (Victor) & Qi, Mingyao, 2023. "Reinforcement learning for multi-item retrieval in the puzzle-based storage system," European Journal of Operational Research, Elsevier, vol. 305(2), pages 820-837.
    17. Holger Mohr & Katharina Zwosta & Dimitrije Markovic & Sebastian Bitzer & Uta Wolfensteller & Hannes Ruge, 2018. "Deterministic response strategies in a trial-and-error learning task," PLOS Computational Biology, Public Library of Science, vol. 14(11), pages 1-19, November.
    18. Zhang, Tianhao & Dong, Zhe & Huang, Xiaojin, 2024. "Multi-objective optimization of thermal power and outlet steam temperature for a nuclear steam supply system with deep reinforcement learning," Energy, Elsevier, vol. 286(C).
    19. Taejong Joo & Hyunyoung Jun & Dongmin Shin, 2022. "Task Allocation in Human–Machine Manufacturing Systems Using Deep Reinforcement Learning," Sustainability, MDPI, vol. 14(4), pages 1-18, February.
    20. Sebastian Jaimungal, 2022. "Reinforcement learning and stochastic optimisation," Finance and Stochastics, Springer, vol. 26(1), pages 103-129, January.
