
Multi-agent deep reinforcement learning based demand response for discrete manufacturing systems energy management

Author

Listed:
  • Lu, Renzhi
  • Li, Yi-Chang
  • Li, Yuting
  • Jiang, Junhui
  • Ding, Yuemin

Abstract

With advances in smart grid technologies, demand response has played a major role in improving the reliability of grids and reducing costs for customers. Implementing demand response is especially pertinent in the industrial sector, whose energy consumption is typically the largest of any sector. This paper proposes a multi-agent deep reinforcement learning based demand response scheme for energy management of discrete manufacturing systems. The industrial manufacturing system is first formulated as a partially observable Markov game; a multi-agent deep deterministic policy gradient (MADDPG) algorithm is then adopted to obtain the optimal schedule for the different machines. A typical lithium-ion battery assembly manufacturing system is used to demonstrate the effectiveness of the proposed scheme. Simulation results show that the presented demand response algorithm can minimize electricity costs while maintaining production tasks, compared to a benchmark without demand response. Moreover, the performance of the multi-agent deep reinforcement learning approach is compared against that of a mathematical-model-based method.
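
The training scheme the abstract describes follows the MADDPG pattern of centralized training with decentralized execution: each machine-agent has its own actor that acts on local observations only (the partially observable Markov game formulation), while each agent's critic sees the joint observations and actions during training. Below is a minimal, illustrative PyTorch sketch of that pattern; all dimensions, network sizes, hyperparameters, and the random batch are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

N_AGENTS, OBS_DIM, ACT_DIM = 3, 4, 1   # hypothetical: 3 machines, small local observation

class Actor(nn.Module):
    """Decentralized policy: maps an agent's own observation to its action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, ACT_DIM), nn.Tanh())
    def forward(self, obs):
        return self.net(obs)            # continuous action in [-1, 1], e.g. a power setpoint

class Critic(nn.Module):
    """Centralized action-value function: sees all agents' observations and actions."""
    def __init__(self):
        super().__init__()
        joint_dim = N_AGENTS * (OBS_DIM + ACT_DIM)
        self.net = nn.Sequential(nn.Linear(joint_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
    def forward(self, all_obs, all_act):
        return self.net(torch.cat([all_obs, all_act], dim=-1))

actors  = [Actor()  for _ in range(N_AGENTS)]
critics = [Critic() for _ in range(N_AGENTS)]
actor_opts  = [torch.optim.Adam(a.parameters(), lr=1e-3) for a in actors]
critic_opts = [torch.optim.Adam(c.parameters(), lr=1e-3) for c in critics]

def maddpg_update(obs, act, rew, nxt, gamma=0.95):
    """One gradient step per agent on a replay batch.
    Shapes: obs/nxt (B, N, OBS_DIM), act (B, N, ACT_DIM), rew (B, N).
    Target networks are omitted for brevity; a full implementation keeps
    slowly updated copies of each actor and critic for the TD target."""
    B = obs.shape[0]
    flat = lambda x: x.reshape(B, -1)
    with torch.no_grad():               # next joint action from the current policies
        nxt_act = torch.stack([actors[i](nxt[:, i]) for i in range(N_AGENTS)], dim=1)
    for i in range(N_AGENTS):
        # Critic update: regress the centralized Q toward the one-step TD target.
        with torch.no_grad():
            y = rew[:, i:i+1] + gamma * critics[i](flat(nxt), flat(nxt_act))
        q = critics[i](flat(obs), flat(act))
        critic_loss = ((q - y) ** 2).mean()
        critic_opts[i].zero_grad(); critic_loss.backward(); critic_opts[i].step()
        # Actor update: ascend agent i's centralized Q with respect to its own
        # action, holding the other agents' replayed actions fixed.
        act_i = act.clone()
        act_i[:, i] = actors[i](obs[:, i])
        actor_loss = -critics[i](flat(obs), flat(act_i)).mean()
        actor_opts[i].zero_grad(); actor_loss.backward(); actor_opts[i].step()

# Example call on a random batch (stand-in for samples from a replay buffer):
maddpg_update(torch.randn(32, N_AGENTS, OBS_DIM),
              torch.rand(32, N_AGENTS, ACT_DIM) * 2 - 1,
              torch.randn(32, N_AGENTS),
              torch.randn(32, N_AGENTS, OBS_DIM))
```

At execution time only the trained actors are used, so each machine can schedule itself from its local observations without a central coordinator, which is what makes the centralized-critic structure compatible with the partially observable setting.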

Suggested Citation

  • Lu, Renzhi & Li, Yi-Chang & Li, Yuting & Jiang, Junhui & Ding, Yuemin, 2020. "Multi-agent deep reinforcement learning based demand response for discrete manufacturing systems energy management," Applied Energy, Elsevier, vol. 276(C).
  • Handle: RePEc:eee:appene:v:276:y:2020:i:c:s0306261920309855
    DOI: 10.1016/j.apenergy.2020.115473

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261920309855
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2020.115473?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy you can access through your library subscription.

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Vázquez-Canteli, José R. & Nagy, Zoltán, 2019. "Reinforcement learning for demand response: A review of algorithms and modeling techniques," Applied Energy, Elsevier, vol. 235(C), pages 1072-1089.
    2. Lu, Renzhi & Hong, Seung Ho & Zhang, Xiongfeng, 2018. "A Dynamic pricing demand response algorithm for smart grid: Reinforcement learning approach," Applied Energy, Elsevier, vol. 220(C), pages 220-230.
    3. Hua, Haochen & Qin, Yuchao & Hao, Chuantong & Cao, Junwei, 2019. "Optimal energy management strategies for energy Internet via deep reinforcement learning approach," Applied Energy, Elsevier, vol. 239(C), pages 598-609.
    4. Yu, Mengmeng & Lu, Renzhi & Hong, Seung Ho, 2016. "A real-time decision model for industrial load management in a smart grid," Applied Energy, Elsevier, vol. 183(C), pages 1488-1497.
    5. Abdulaal, Ahmed & Moghaddass, Ramin & Asfour, Shihab, 2017. "Two-stage discrete-continuous multi-objective load optimization: An industrial consumer utility approach to demand response," Applied Energy, Elsevier, vol. 206(C), pages 206-221.
    6. Pallonetto, Fabiano & De Rosa, Mattia & Milano, Federico & Finn, Donal P., 2019. "Demand response algorithms for smart-grid ready residential buildings using machine learning models," Applied Energy, Elsevier, vol. 239(C), pages 1265-1282.
    7. May, Gökan & Stahl, Bojan & Taisch, Marco, 2016. "Energy management in manufacturing: Toward eco-factories of the future – A focus group study," Applied Energy, Elsevier, vol. 164(C), pages 628-638.
    8. Lynch, Muireann Á. & Nolan, Sheila & Devine, Mel T. & O’Malley, Mark, 2019. "The impacts of demand response participation in capacity markets," Applied Energy, Elsevier, vol. 250(C), pages 444-451.
    9. Kou, Peng & Liang, Deliang & Wang, Chen & Wu, Zihao & Gao, Lin, 2020. "Safe deep reinforcement learning-based constrained optimal control scheme for active distribution networks," Applied Energy, Elsevier, vol. 264(C).
    10. Lu, Renzhi & Hong, Seung Ho, 2019. "Incentive-based demand response for smart grid with reinforcement learning and deep neural network," Applied Energy, Elsevier, vol. 236(C), pages 937-949.
    11. Wohlfarth, Katharina & Klobasa, Marian & Gutknecht, Ralph, 2020. "Demand response in the service sector – Theoretical, technical and practical potentials," Applied Energy, Elsevier, vol. 258(C).
    12. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    13. Wu, Yuankai & Tan, Huachun & Peng, Jiankun & Zhang, Hailong & He, Hongwen, 2019. "Deep reinforcement learning of energy management with continuous control strategy and traffic information for a series-parallel plug-in hybrid electric bus," Applied Energy, Elsevier, vol. 247(C), pages 454-466.
    14. Wang, Jianxiao & Zhong, Haiwang & Ma, Ziming & Xia, Qing & Kang, Chongqing, 2017. "Review and prospect of integrated demand response in the multi-energy system," Applied Energy, Elsevier, vol. 202(C), pages 772-782.
    15. Li, Yuecheng & He, Hongwen & Khajepour, Amir & Wang, Hong & Peng, Jiankun, 2019. "Energy management for a power-split hybrid electric bus via deep reinforcement learning with terrain information," Applied Energy, Elsevier, vol. 255(C).
    16. Desta, Alemayehu Addisu & Badis, Hakim & George, Laurent, 2018. "Demand response scheduling in industrial asynchronous production lines constrained by available power and production rate," Applied Energy, Elsevier, vol. 230(C), pages 1414-1424.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Li, Hongcheng & Yang, Dan & Cao, Huajun & Ge, Weiwei & Chen, Erheng & Wen, Xuanhao & Li, Chongbo, 2022. "Data-driven hybrid petri-net based energy consumption behaviour modelling for digital twin of energy-efficient manufacturing system," Energy, Elsevier, vol. 239(PC).
    2. Jonas Sievers & Thomas Blank, 2023. "A Systematic Literature Review on Data-Driven Residential and Industrial Energy Management Systems," Energies, MDPI, vol. 16(4), pages 1-21, February.
    3. Pinto, Giuseppe & Kathirgamanathan, Anjukan & Mangina, Eleni & Finn, Donal P. & Capozzoli, Alfonso, 2022. "Enhancing energy management in grid-interactive buildings: A comparison among cooperative and coordinated architectures," Applied Energy, Elsevier, vol. 310(C).
    4. Shen, Rendong & Zhong, Shengyuan & Wen, Xin & An, Qingsong & Zheng, Ruifan & Li, Yang & Zhao, Jun, 2022. "Multi-agent deep reinforcement learning optimization framework for building energy system with renewable energy," Applied Energy, Elsevier, vol. 312(C).
    5. Chao-Chung Hsu & Bi-Hai Jiang & Chun-Cheng Lin, 2023. "A Survey on Recent Applications of Artificial Intelligence and Optimization for Smart Grids in Smart Manufacturing," Energies, MDPI, vol. 16(22), pages 1-15, November.
    6. Lu, Renzhi & Bai, Ruichang & Huang, Yuan & Li, Yuting & Jiang, Junhui & Ding, Yuemin, 2021. "Data-driven real-time price-based demand response for industrial facilities energy management," Applied Energy, Elsevier, vol. 283(C).
    7. Wang, Yi & Qiu, Dawei & Strbac, Goran, 2022. "Multi-agent deep reinforcement learning for resilience-driven routing and scheduling of mobile energy storage systems," Applied Energy, Elsevier, vol. 310(C).
    8. Zeng, Lanting & Qiu, Dawei & Sun, Mingyang, 2022. "Resilience enhancement of multi-agent reinforcement learning-based demand response against adversarial attacks," Applied Energy, Elsevier, vol. 324(C).
    9. Zhu, Ziqing & Hu, Ze & Chan, Ka Wing & Bu, Siqi & Zhou, Bin & Xia, Shiwei, 2023. "Reinforcement learning in deregulated energy market: A comprehensive review," Applied Energy, Elsevier, vol. 329(C).
    10. Ochoa, Tomás & Gil, Esteban & Angulo, Alejandro & Valle, Carlos, 2022. "Multi-agent deep reinforcement learning for efficient multi-timescale bidding of a hybrid power plant in day-ahead and real-time markets," Applied Energy, Elsevier, vol. 317(C).
    11. Wang, Junya & Zhao, Qinfang & Ning, Ping & Wen, Shikun, 2024. "Greenhouse gas contribution and emission reduction potential prediction of China's aluminum industry," Energy, Elsevier, vol. 290(C).
    12. Dinh, Huy Truong & Lee, Kyu-haeng & Kim, Daehee, 2022. "Supervised-learning-based hour-ahead demand response for a behavior-based home energy management system approximating MILP optimization," Applied Energy, Elsevier, vol. 321(C).
    13. Eduardo J. Salazar & Mauro Jurado & Mauricio E. Samper, 2023. "Reinforcement Learning-Based Pricing and Incentive Strategy for Demand Response in Smart Grids," Energies, MDPI, vol. 16(3), pages 1-33, February.
    14. Xie, Jiahan & Ajagekar, Akshay & You, Fengqi, 2023. "Multi-Agent attention-based deep reinforcement learning for demand response in grid-responsive buildings," Applied Energy, Elsevier, vol. 342(C).
    15. Yun, Lingxiang & Li, Lin & Ma, Shuaiyin, 2022. "Demand response for manufacturing systems considering the implications of fast-charging battery powered material handling equipment," Applied Energy, Elsevier, vol. 310(C).
    16. Lu, Renzhi & Bai, Ruichang & Ding, Yuemin & Wei, Min & Jiang, Junhui & Sun, Mingyang & Xiao, Feng & Zhang, Hai-Tao, 2021. "A hybrid deep learning-based online energy management scheme for industrial microgrid," Applied Energy, Elsevier, vol. 304(C).
    17. Mahdi Khodayar & Jacob Regan, 2023. "Deep Neural Networks in Power Systems: A Review," Energies, MDPI, vol. 16(12), pages 1-38, June.
    18. Woltmann, Stefan & Kittel, Julia, 2022. "Development and implementation of multi-agent systems for demand response aggregators in an industrial context," Applied Energy, Elsevier, vol. 314(C).
    19. Yu Pu & Fang Li & Shahin Rahimifard, 2024. "Multi-Agent Reinforcement Learning for Job Shop Scheduling in Dynamic Environments," Sustainability, MDPI, vol. 16(8), pages 1-26, April.
    20. Ajagekar, Akshay & Decardi-Nelson, Benjamin & You, Fengqi, 2024. "Energy management for demand response in networked greenhouses with multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 355(C).
    21. Qiu, Dawei & Ye, Yujian & Papadaskalopoulos, Dimitrios & Strbac, Goran, 2021. "Scalable coordinated management of peer-to-peer energy trading: A multi-cluster deep reinforcement learning approach," Applied Energy, Elsevier, vol. 292(C).
    22. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    23. Zhu, Dafeng & Yang, Bo & Liu, Yuxiang & Wang, Zhaojian & Ma, Kai & Guan, Xinping, 2022. "Energy management based on multi-agent deep reinforcement learning for a multi-energy industrial park," Applied Energy, Elsevier, vol. 311(C).
    24. Zhou, Yanting & Ma, Zhongjing & Shi, Xingyu & Zou, Suli, 2024. "Multi-agent optimal scheduling for integrated energy system considering the global carbon emission constraint," Energy, Elsevier, vol. 288(C).
    25. Golmohamadi, Hessam, 2022. "Demand-side management in industrial sector: A review of heavy industries," Renewable and Sustainable Energy Reviews, Elsevier, vol. 156(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Lu, Renzhi & Bai, Ruichang & Ding, Yuemin & Wei, Min & Jiang, Junhui & Sun, Mingyang & Xiao, Feng & Zhang, Hai-Tao, 2021. "A hybrid deep learning-based online energy management scheme for industrial microgrid," Applied Energy, Elsevier, vol. 304(C).
    2. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
    3. Lu, Renzhi & Bai, Ruichang & Huang, Yuan & Li, Yuting & Jiang, Junhui & Ding, Yuemin, 2021. "Data-driven real-time price-based demand response for industrial facilities energy management," Applied Energy, Elsevier, vol. 283(C).
    4. Antonopoulos, Ioannis & Robu, Valentin & Couraud, Benoit & Kirli, Desen & Norbu, Sonam & Kiprakis, Aristides & Flynn, David & Elizondo-Gonzalez, Sergio & Wattam, Steve, 2020. "Artificial intelligence and machine learning approaches to energy demand-side response: A systematic review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 130(C).
    5. Ibrahim, Muhammad Sohail & Dong, Wei & Yang, Qiang, 2020. "Machine learning driven smart electric power systems: Current trends and new perspectives," Applied Energy, Elsevier, vol. 272(C).
    6. Kirchem, Dana & Lynch, Muireann Á. & Bertsch, Valentin & Casey, Eoin, 2020. "Modelling demand response with process models and energy systems models: Potential applications for wastewater treatment within the energy-water nexus," Applied Energy, Elsevier, vol. 260(C).
    7. Xie, Jiahan & Ajagekar, Akshay & You, Fengqi, 2023. "Multi-Agent attention-based deep reinforcement learning for demand response in grid-responsive buildings," Applied Energy, Elsevier, vol. 342(C).
    8. Du, Yan & Zandi, Helia & Kotevska, Olivera & Kurte, Kuldeep & Munk, Jeffery & Amasyali, Kadir & Mckee, Evan & Li, Fangxing, 2021. "Intelligent multi-zone residential HVAC control strategy based on deep reinforcement learning," Applied Energy, Elsevier, vol. 281(C).
    9. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    10. Wen, Lulu & Zhou, Kaile & Li, Jun & Wang, Shanyong, 2020. "Modified deep learning and reinforcement learning for an incentive-based demand response model," Energy, Elsevier, vol. 205(C).
    11. Yi Kuang & Xiuli Wang & Hongyang Zhao & Yijun Huang & Xianlong Chen & Xifan Wang, 2020. "Agent-Based Energy Sharing Mechanism Using Deep Deterministic Policy Gradient Algorithm," Energies, MDPI, vol. 13(19), pages 1-20, September.
    12. Kong, Xiangyu & Kong, Deqian & Yao, Jingtao & Bai, Linquan & Xiao, Jie, 2020. "Online pricing of demand response based on long short-term memory and reinforcement learning," Applied Energy, Elsevier, vol. 271(C).
    13. Zeng, Huibin & Shao, Bilin & Dai, Hongbin & Tian, Ning & Zhao, Wei, 2023. "Incentive-based demand response strategies for natural gas considering carbon emissions and load volatility," Applied Energy, Elsevier, vol. 348(C).
    14. Zhang, Xiongfeng & Lu, Renzhi & Jiang, Junhui & Hong, Seung Ho & Song, Won Seok, 2021. "Testbed implementation of reinforcement learning-based demand response energy management system," Applied Energy, Elsevier, vol. 297(C).
    15. Wang, Yi & Qiu, Dawei & Sun, Mingyang & Strbac, Goran & Gao, Zhiwei, 2023. "Secure energy management of multi-energy microgrid: A physical-informed safe reinforcement learning approach," Applied Energy, Elsevier, vol. 335(C).
    16. Oleh Lukianykhin & Tetiana Bogodorova, 2021. "Voltage Control-Based Ancillary Service Using Deep Reinforcement Learning," Energies, MDPI, vol. 14(8), pages 1-22, April.
    17. Daniel Egan & Qilun Zhu & Robert Prucka, 2023. "A Review of Reinforcement Learning-Based Powertrain Controllers: Effects of Agent Selection for Mixed-Continuity Control and Reward Formulation," Energies, MDPI, vol. 16(8), pages 1-31, April.
    18. Chen, Jiaxin & Shu, Hong & Tang, Xiaolin & Liu, Teng & Wang, Weida, 2022. "Deep reinforcement learning-based multi-objective control of hybrid power system combined with road recognition under time-varying environment," Energy, Elsevier, vol. 239(PC).
    19. Xu, Fangyuan & Zhu, Weidong & Wang, Yi Fei & Lai, Chun Sing & Yuan, Haoliang & Zhao, Yujia & Guo, Siming & Fu, Zhengxin, 2022. "A new deregulated demand response scheme for load over-shifting city in regulated power market," Applied Energy, Elsevier, vol. 311(C).
    20. Qi, Chunyang & Zhu, Yiwen & Song, Chuanxue & Yan, Guangfu & Xiao, Feng & Wang, Da & Zhang, Xu & Cao, Jingwei & Song, Shixin, 2022. "Hierarchical reinforcement learning based energy management strategy for hybrid electric vehicle," Energy, Elsevier, vol. 238(PA).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:276:y:2020:i:c:s0306261920309855. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu. General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.