
Comparison of Deep Reinforcement Learning and PID Controllers for Automatic Cold Shutdown Operation

Author

Listed:
  • Daeil Lee

    (Department of Nuclear Engineering, Chosun University, Dong-gu, Gwangju 61452, Korea)

  • Seoryong Koo

    (Korea Atomic Energy Research Institute, Yuseong-gu, Daejeon 34057, Korea)

  • Inseok Jang

    (Korea Atomic Energy Research Institute, Yuseong-gu, Daejeon 34057, Korea)

  • Jonghyun Kim

    (Department of Nuclear Engineering, Chosun University, Dong-gu, Gwangju 61452, Korea)

Abstract

Many industries apply traditional controllers to automate manual control. In recent years, artificial intelligence controllers based on deep-learning techniques have been suggested as advanced controllers that can achieve operational goals in many industrial domains, much as human operators do. Deep reinforcement learning (DRL) is a powerful method by which these controllers can learn to achieve their specific operational goals. Because DRL controllers learn by sampling from a target system, they can overcome the limitations of traditional controllers, such as proportional-integral-derivative (PID) controllers. In nuclear power plants (NPPs), automatic systems can manage components during full-power operation; in contrast, startup and shutdown operations are less automated and are typically performed by operators. This study proposes DRL-based and PID-based controllers for the cold shutdown operation, which is part of the startup operation. By comparing the proposed controllers, this study aims to verify that learning-based controllers can overcome the limitations of traditional controllers and achieve operational goals with minimal manipulation. First, the general operating procedures for the cold shutdown operation were analyzed to identify the required components, operational goals, and inputs/outputs of the operation. Then, PID-based and DRL-based controllers were designed. The PID-based controller consists of PID controllers tuned using the Ziegler–Nichols rule. The DRL-based controller, built on a long short-term memory (LSTM) network, is trained with a soft actor-critic algorithm whose training time is reduced through distributed prioritized experience replay and distributed learning. The LSTM processes plant time-series data to generate control signals. The proposed controllers were then validated on an NPP simulator during the cold shutdown operation. Finally, the operational performance of the PID-based and DRL-based controllers is compared and discussed.
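To make the baseline controller design concrete, below is a minimal Python sketch of the classic Ziegler–Nichols closed-loop tuning rule named in the abstract, paired with a simple discrete PID controller. The ultimate gain (Ku), ultimate period (Tu), and the sample setpoint/measurement values are illustrative assumptions; the paper does not report the values identified for each plant component loop.

```python
# Minimal sketch of Ziegler-Nichols closed-loop tuning plus a discrete PID.
# Ku (ultimate gain) and Tu (ultimate period, seconds) are placeholders,
# not values from the paper.

def ziegler_nichols_pid(Ku: float, Tu: float) -> tuple[float, float, float]:
    """Return (Kp, Ki, Kd) from the classic ZN closed-loop table."""
    Kp = 0.6 * Ku
    Ti = Tu / 2.0          # integral time
    Td = Tu / 8.0          # derivative time
    return Kp, Kp / Ti, Kp * Td


class PID:
    """Discrete PID controller: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt."""

    def __init__(self, Kp: float, Ki: float, Kd: float, dt: float):
        self.Kp, self.Ki, self.Kd, self.dt = Kp, Ki, Kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.Kp * error + self.Ki * self.integral + self.Kd * derivative


# Example: tune a hypothetical pressurizer-level loop with assumed Ku, Tu.
Kp, Ki, Kd = ziegler_nichols_pid(Ku=2.0, Tu=30.0)
controller = PID(Kp, Ki, Kd, dt=1.0)
u = controller.step(setpoint=55.0, measurement=52.3)
```

This uses the standard ZN closed-loop table (Kp = 0.6 Ku, Ti = Tu/2, Td = Tu/8); the paper states only that its PID controllers were tuned with the Ziegler–Nichols rule, not which variant or per-loop parameters.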
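Likewise, here is a minimal sketch of how an LSTM-based soft actor-critic policy can map a window of plant time-series observations to bounded control signals. PyTorch, the layer sizes, and the observation/action dimensions are assumptions for illustration; the paper's actual architecture, hyperparameters, and distributed training setup are not described on this page.

```python
# Minimal sketch (PyTorch assumed) of an LSTM actor in the SAC style:
# an LSTM summarizes a window of plant time-series observations, and a
# Gaussian head with tanh squashing emits bounded control signals.

import torch
import torch.nn as nn


class LSTMActor(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.mu = nn.Linear(hidden, act_dim)       # mean of the Gaussian
        self.log_std = nn.Linear(hidden, act_dim)  # log std of the Gaussian

    def forward(self, obs_seq: torch.Tensor) -> torch.Tensor:
        # obs_seq: (batch, time, obs_dim); use the final hidden state
        _, (h_n, _) = self.lstm(obs_seq)
        h = h_n[-1]
        mu = self.mu(h)
        std = self.log_std(h).clamp(-20, 2).exp()
        # Reparameterized sample, squashed to [-1, 1] as in standard SAC.
        return torch.tanh(mu + std * torch.randn_like(std))


# Example: one 60-step window of 20 plant variables -> 4 control signals.
actor = LSTMActor(obs_dim=20, act_dim=4)
signals = actor(torch.zeros(1, 60, 20))  # tensor of shape (1, 4)
```

The LSTM is what lets the policy condition on a history of plant states rather than a single snapshot, which is why the abstract pairs it with plant time-series inputs.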

Suggested Citation

  • Daeil Lee & Seoryong Koo & Inseok Jang & Jonghyun Kim, 2022. "Comparison of Deep Reinforcement Learning and PID Controllers for Automatic Cold Shutdown Operation," Energies, MDPI, vol. 15(8), pages 1-25, April.
  • Handle: RePEc:gam:jeners:v:15:y:2022:i:8:p:2834-:d:792850

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/15/8/2834/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/15/8/2834/
    Download Restriction: no

    References listed on IDEAS

    1. Lian, Renzong & Peng, Jiankun & Wu, Yuankai & Tan, Huachun & Zhang, Hailong, 2020. "Rule-interposing deep reinforcement learning based energy management strategy for power-split hybrid electric vehicle," Energy, Elsevier, vol. 197(C).
    2. Yang, Jaemin & Kim, Jonghyun, 2020. "Accident diagnosis algorithm with untrained accident identification during power-increasing operation," Reliability Engineering and System Safety, Elsevier, vol. 202(C).
    3. Du, Guodong & Zou, Yuan & Zhang, Xudong & Liu, Teng & Wu, Jinlong & He, Dingbo, 2020. "Deep reinforcement learning based energy management for a hybrid electric vehicle," Energy, Elsevier, vol. 201(C).
    4. Kazmi, Hussain & Mehmood, Fahad & Lodeweyckx, Stefan & Driesen, Johan, 2018. "Gigawatt-hour scale savings on a budget of zero: Deep reinforcement learning based optimal control of hot water systems," Energy, Elsevier, vol. 144(C), pages 159-168.
    5. Rocchetta, R. & Bellani, L. & Compare, M. & Zio, E. & Patelli, E., 2019. "A reinforcement learning framework for optimal operation and maintenance of power grids," Applied Energy, Elsevier, vol. 241(C), pages 291-301.
    6. Aitor Saenz-Aguirre & Ekaitz Zulueta & Unai Fernandez-Gamiz & Javier Lozano & Jose Manuel Lopez-Guede, 2019. "Artificial Neural Network Based Reinforcement Learning for Wind Turbine Yaw Control," Energies, MDPI, vol. 12(3), pages 1-17, January.
    7. Dong, Zhe & Huang, Xiaojin & Dong, Yujie & Zhang, Zuoyi, 2020. "Multilayer perception based reinforcement learning supervisory control of energy systems with application to a nuclear steam supply system," Applied Energy, Elsevier, vol. 259(C).
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Abidur Rahman Sagor & Md Abu Talha & Shameem Ahmad & Tofael Ahmed & Mohammad Rafiqul Alam & Md. Rifat Hazari & G. M. Shafiullah, 2024. "Pelican Optimization Algorithm-Based Proportional–Integral–Derivative Controller for Superior Frequency Regulation in Interconnected Multi-Area Power Generating System," Energies, MDPI, vol. 17(13), pages 1-24, July.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    2. Zhu, Tao & Wills, Richard G.A. & Lot, Roberto & Ruan, Haijun & Jiang, Zhihao, 2021. "Adaptive energy management of a battery-supercapacitor energy storage system for electric vehicles based on flexible perception and neural network fitting," Applied Energy, Elsevier, vol. 292(C).
    3. Correa-Jullian, Camila & López Droguett, Enrique & Cardemil, José Miguel, 2020. "Operation scheduling in a solar thermal system: A reinforcement learning-based framework," Applied Energy, Elsevier, vol. 268(C).
    4. Jiang, Yue & Meng, Hao & Chen, Guanpeng & Yang, Congnan & Xu, Xiaojun & Zhang, Lei & Xu, Haijun, 2022. "Differential-steering based path tracking control and energy-saving torque distribution strategy of 6WID unmanned ground vehicle," Energy, Elsevier, vol. 254(PA).
    5. Ju, Fei & Zhuang, Weichao & Wang, Liangmo & Zhang, Zhe, 2020. "Comparison of four-wheel-drive hybrid powertrain configurations," Energy, Elsevier, vol. 209(C).
    6. Miranda, Matheus H.R. & Silva, Fabrício L. & Lourenço, Maria A.M. & Eckert, Jony J. & Silva, Ludmila C.A., 2022. "Vehicle drivetrain and fuzzy controller optimization using a planar dynamics simulation based on a real-world driving cycle," Energy, Elsevier, vol. 257(C).
    7. Penghui Qiang & Peng Wu & Tao Pan & Huaiquan Zang, 2021. "Real-Time Approximate Equivalent Consumption Minimization Strategy Based on the Single-Shaft Parallel Hybrid Powertrain," Energies, MDPI, vol. 14(23), pages 1-22, November.
    8. Wen, Lulu & Zhou, Kaile & Li, Jun & Wang, Shanyong, 2020. "Modified deep learning and reinforcement learning for an incentive-based demand response model," Energy, Elsevier, vol. 205(C).
    9. Qi, Chunyang & Zhu, Yiwen & Song, Chuanxue & Yan, Guangfu & Xiao, Feng & Wang, Da & Zhang, Xu & Cao, Jingwei & Song, Shixin, 2022. "Hierarchical reinforcement learning based energy management strategy for hybrid electric vehicle," Energy, Elsevier, vol. 238(PA).
    10. Robert Jane & Tae Young Kim & Samantha Rose & Emily Glass & Emilee Mossman & Corey James, 2022. "Developing AI/ML Based Predictive Capabilities for a Compression Ignition Engine Using Pseudo Dynamometer Data," Energies, MDPI, vol. 15(21), pages 1-49, October.
    11. Yang, Dongpo & Liu, Tong & Song, Dafeng & Zhang, Xuanming & Zeng, Xiaohua, 2023. "A real time multi-objective optimization Guided-MPC strategy for power-split hybrid electric bus based on velocity prediction," Energy, Elsevier, vol. 276(C).
    12. Zhengyu Yao & Hwan-Sik Yoon & Yang-Ki Hong, 2023. "Control of Hybrid Electric Vehicle Powertrain Using Offline-Online Hybrid Reinforcement Learning," Energies, MDPI, vol. 16(2), pages 1-18, January.
    13. Wang, Yue & Li, Keqiang & Zeng, Xiaohua & Gao, Bolin & Hong, Jichao, 2023. "Investigation of novel intelligent energy management strategies for connected HEB considering global planning of fixed-route information," Energy, Elsevier, vol. 263(PB).
    14. Li, Shuangqi & He, Hongwen & Zhao, Pengfei, 2021. "Energy management for hybrid energy storage system in electric vehicle: A cyber-physical system perspective," Energy, Elsevier, vol. 230(C).
    15. Dimitrios Vamvakas & Panagiotis Michailidis & Christos Korkas & Elias Kosmatopoulos, 2023. "Review and Evaluation of Reinforcement Learning Frameworks on Smart Grid Applications," Energies, MDPI, vol. 16(14), pages 1-38, July.
    16. Yao, Yongming & Wang, Jie & Zhou, Zhicong & Li, Hang & Liu, Huiying & Li, Tianyu, 2023. "Grey Markov prediction-based hierarchical model predictive control energy management for fuel cell/battery hybrid unmanned aerial vehicles," Energy, Elsevier, vol. 262(PA).
    17. Dong, Peng & Zhao, Junwei & Liu, Xuewu & Wu, Jian & Xu, Xiangyang & Liu, Yanfang & Wang, Shuhan & Guo, Wei, 2022. "Practical application of energy management strategy for hybrid electric vehicles based on intelligent and connected technologies: Development stages, challenges, and future trends," Renewable and Sustainable Energy Reviews, Elsevier, vol. 170(C).
    18. Guo, Ningyuan & Zhang, Xudong & Zou, Yuan & Guo, Lingxiong & Du, Guodong, 2021. "Real-time predictive energy management of plug-in hybrid electric vehicles for coordination of fuel economy and battery degradation," Energy, Elsevier, vol. 214(C).
    19. Kong, Yan & Xu, Nan & Liu, Qiao & Sui, Yan & Yue, Fenglai, 2023. "A data-driven energy management method for parallel PHEVs based on action dependent heuristic dynamic programming (ADHDP) model," Energy, Elsevier, vol. 265(C).
    20. Liu, Bo & Sun, Chao & Wang, Bo & Liang, Weiqiang & Ren, Qiang & Li, Junqiu & Sun, Fengchun, 2022. "Bi-level convex optimization of eco-driving for connected Fuel Cell Hybrid Electric Vehicles through signalized intersections," Energy, Elsevier, vol. 252(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:15:y:2022:i:8:p:2834-:d:792850. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.