IDEAS home Printed from https://ideas.repec.org/a/eee/appene/v321y2022ics0306261922007231.html

Supervised-learning-based hour-ahead demand response for a behavior-based home energy management system approximating MILP optimization

Author

Listed:
  • Dinh, Huy Truong
  • Lee, Kyu-haeng
  • Kim, Daehee

Abstract

The demand response (DR) program of a traditional home energy management system (HEMS) usually controls or schedules appliances to monitor energy usage, minimize energy cost, and maximize user comfort. In this study, instead of interfering with appliances and changing residents’ behavior, the proposed hour-ahead DR strategy first learns the appliance usage behavior of residents; then, based on this knowledge, it silently controls the energy storage system (ESS) and renewable energy system (RES) to minimize the daily energy cost. To this end, the deep neural networks (DNNs) of the proposed DR strategy approximate mixed-integer linear programming (MILP) optimization using supervised learning. The training datasets are created from the optimal outputs of an MILP solver applied to historical data. After training, in each time slot, these DNNs control the ESS and RES using real-time data from the surrounding environment. For comparison, we develop two alternative strategies: a multi-agent reinforcement-learning-based strategy, which is an hour-ahead strategy, and a forecast-based MILP strategy, which is a day-ahead strategy. For evaluation and verification, the proposed approaches are applied to three different real-world homes with real-world, real-time global horizontal irradiance and price data. Numerical results verify the effectiveness and superiority of the proposed MILP-based supervised-learning strategy in terms of daily energy cost.
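The pipeline the abstract describes — solve the MILP offline on historical data, train a network on the resulting (state, optimal action) pairs, then query the network in real time each hour — can be sketched as below. This is an illustrative reconstruction, not the authors' code: the closed-form threshold rule `toy_optimal_ess_power` merely stands in for an actual MILP solver, and all names, thresholds, and units are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: label historical states with "optimal" decisions.
# A closed-form threshold rule stands in here for the paper's MILP solver.
def toy_optimal_ess_power(price, soc, load, pv):
    """Hypothetical stand-in for the MILP solver's optimal ESS setpoint
    (negative = charge, positive = discharge)."""
    net = load - pv                       # demand not covered by the RES
    if price < 0.10 and soc < 0.9:        # cheap slot: charge the ESS
        return -min(1.0, 0.9 - soc)
    if price > 0.20 and soc > 0.1:        # expensive slot: discharge
        return float(np.clip(net, 0.0, soc - 0.1))
    return 0.0

# Historical features per hour: (price, state of charge, load, PV output)
X = rng.uniform([0.05, 0.0, 0.0, 0.0], [0.30, 1.0, 2.0, 1.5], size=(2000, 4))
y = np.array([toy_optimal_ess_power(*row) for row in X])
mu, sd = X.mean(axis=0), X.std(axis=0)
Xn = (X - mu) / sd                        # normalize features for training

# Step 2: supervised learning -- a one-hidden-layer network trained by
# full-batch gradient descent to imitate the optimizer's outputs.
W1 = rng.normal(0.0, 0.5, (4, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.5, (32, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(5000):
    h = np.tanh(Xn @ W1 + b1)
    err = (h @ W2 + b2).ravel() - y       # gradient of MSE w.r.t. prediction
    gW2 = h.T @ err[:, None] / len(X); gb2 = err.mean()
    gh = err[:, None] @ W2.T * (1.0 - h**2)
    gW1 = Xn.T @ gh / len(X); gb1 = gh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# Step 3: hour-ahead control -- query the trained surrogate with real-time
# measurements instead of re-solving the MILP in each time slot.
def dnn_policy(price, soc, load, pv):
    x = (np.array([price, soc, load, pv]) - mu) / sd
    return float(np.tanh(x @ W1 + b1) @ W2 + b2)

mae = np.mean([abs(dnn_policy(*row) - toy_optimal_ess_power(*row))
               for row in X[:200]])
print(f"mean absolute imitation error: {mae:.3f}")
```

The point of the imitation step is speed: inference is a single forward pass, so the controller fits an hour-ahead real-time loop where re-solving the MILP every slot, or committing to a day-ahead forecast, would not.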

Suggested Citation

  • Dinh, Huy Truong & Lee, Kyu-haeng & Kim, Daehee, 2022. "Supervised-learning-based hour-ahead demand response for a behavior-based home energy management system approximating MILP optimization," Applied Energy, Elsevier, vol. 321(C).
  • Handle: RePEc:eee:appene:v:321:y:2022:i:c:s0306261922007231
    DOI: 10.1016/j.apenergy.2022.119382

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0306261922007231
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.apenergy.2022.119382?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Elkazaz, Mahmoud & Sumner, Mark & Naghiyev, Eldar & Pholboon, Seksak & Davies, Richard & Thomas, David, 2020. "A hierarchical two-stage energy management for a home microgrid using model predictive and real-time controllers," Applied Energy, Elsevier, vol. 269(C).
    2. Lu, Renzhi & Hong, Seung Ho, 2019. "Incentive-based demand response for smart grid with reinforcement learning and deep neural network," Applied Energy, Elsevier, vol. 236(C), pages 937-949.
    3. Terlouw, Tom & AlSkaif, Tarek & Bauer, Christian & van Sark, Wilfried, 2019. "Optimal energy management in all-electric residential energy systems with heat and electricity storage," Applied Energy, Elsevier, vol. 254(C).
    4. Lu, Renzhi & Li, Yi-Chang & Li, Yuting & Jiang, Junhui & Ding, Yuemin, 2020. "Multi-agent deep reinforcement learning based demand response for discrete manufacturing systems energy management," Applied Energy, Elsevier, vol. 276(C).
    5. Fridgen, Gilbert & Kahlen, Micha & Ketter, Wolfgang & Rieger, Alexander & Thimmel, Markus, 2018. "One rate does not fit all: An empirical analysis of electricity tariffs for residential microgrids," Applied Energy, Elsevier, vol. 210(C), pages 800-814.
    6. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charles Beattie & al., 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    7. Adnan Ahmad & Asif Khan & Nadeem Javaid & Hafiz Majid Hussain & Wadood Abdul & Ahmad Almogren & Atif Alamri & Iftikhar Azim Niaz, 2017. "An Optimized Home Energy Management System with Integrated Renewable Energy and Storage Resources," Energies, MDPI, vol. 10(4), pages 1-35, April.

    Citations

    Cited by:

    1. Muhammad Irfan & Sara Deilami & Shujuan Huang & Binesh Puthen Veettil, 2023. "Rooftop Solar and Electric Vehicle Integration for Smart, Sustainable Homes: A Comprehensive Review," Energies, MDPI, vol. 16(21), pages 1-29, October.
    2. Ren, Kezheng & Liu, Jun & Wu, Zeyang & Liu, Xinglei & Nie, Yongxin & Xu, Haitao, 2024. "A data-driven DRL-based home energy management system optimization framework considering uncertain household parameters," Applied Energy, Elsevier, vol. 355(C).
    3. Nedim Tutkun & Luigi Scarcello & Carlo Mastroianni, 2023. "Improved Low-Cost Home Energy Management Considering User Preferences with Photovoltaic and Energy-Storage Systems," Sustainability, MDPI, vol. 15(11), pages 1-25, May.
    4. Yang, Miao & Ding, Tao & Chang, Xinyue & Xue, Yixun & Ge, Huaichang & Jia, Wenhao & Du, Sijun & Zhang, Hongji, 2024. "Analysis of equivalent energy storage for integrated electricity-heat system," Energy, Elsevier, vol. 303(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    2. Xie, Jiahan & Ajagekar, Akshay & You, Fengqi, 2023. "Multi-Agent attention-based deep reinforcement learning for demand response in grid-responsive buildings," Applied Energy, Elsevier, vol. 342(C).
    3. Zhang, Yang & Yang, Qingyu & Li, Donghe & An, Dou, 2022. "A reinforcement and imitation learning method for pricing strategy of electricity retailer with customers’ flexibility," Applied Energy, Elsevier, vol. 323(C).
    4. Ussama Assad & Muhammad Arshad Shehzad Hassan & Umar Farooq & Asif Kabir & Muhammad Zeeshan Khan & S. Sabahat H. Bukhari & Zain ul Abidin Jaffri & Judit Oláh & József Popp, 2022. "Smart Grid, Demand Response and Optimization: A Critical Review of Computational Methods," Energies, MDPI, vol. 15(6), pages 1-36, March.
    5. Wu, Yuankai & Tan, Huachun & Peng, Jiankun & Zhang, Hailong & He, Hongwen, 2019. "Deep reinforcement learning of energy management with continuous control strategy and traffic information for a series-parallel plug-in hybrid electric bus," Applied Energy, Elsevier, vol. 247(C), pages 454-466.
    6. Lu, Renzhi & Bai, Ruichang & Ding, Yuemin & Wei, Min & Jiang, Junhui & Sun, Mingyang & Xiao, Feng & Zhang, Hai-Tao, 2021. "A hybrid deep learning-based online energy management scheme for industrial microgrid," Applied Energy, Elsevier, vol. 304(C).
    7. Qi, Chunyang & Zhu, Yiwen & Song, Chuanxue & Yan, Guangfu & Xiao, Feng & Wang, Da & Zhang, Xu & Cao, Jingwei & Song, Shixin, 2022. "Hierarchical reinforcement learning based energy management strategy for hybrid electric vehicle," Energy, Elsevier, vol. 238(PA).
    8. Ajagekar, Akshay & Decardi-Nelson, Benjamin & You, Fengqi, 2024. "Energy management for demand response in networked greenhouses with multi-agent deep reinforcement learning," Applied Energy, Elsevier, vol. 355(C).
    9. Chao-Chung Hsu & Bi-Hai Jiang & Chun-Cheng Lin, 2023. "A Survey on Recent Applications of Artificial Intelligence and Optimization for Smart Grids in Smart Manufacturing," Energies, MDPI, vol. 16(22), pages 1-15, November.
    10. Park, Keonwoo & Moon, Ilkyeong, 2022. "Multi-agent deep reinforcement learning approach for EV charging scheduling in a smart grid," Applied Energy, Elsevier, vol. 328(C).
    11. Amit Shewale & Anil Mokhade & Nitesh Funde & Neeraj Dhanraj Bokde, 2022. "A Survey of Efficient Demand-Side Management Techniques for the Residential Appliance Scheduling Problem in Smart Homes," Energies, MDPI, vol. 15(8), pages 1-34, April.
    12. Antonopoulos, Ioannis & Robu, Valentin & Couraud, Benoit & Kirli, Desen & Norbu, Sonam & Kiprakis, Aristides & Flynn, David & Elizondo-Gonzalez, Sergio & Wattam, Steve, 2020. "Artificial intelligence and machine learning approaches to energy demand-side response: A systematic review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 130(C).
    13. Md Masud Rana & Akhlaqur Rahman & Moslem Uddin & Md Rasel Sarkar & SK. A. Shezan & C M F S Reza & Md. Fatin Ishraque & Mohammad Belayet Hossain, 2022. "Efficient Energy Distribution for Smart Household Applications," Energies, MDPI, vol. 15(6), pages 1-19, March.
    14. Ma, Siyu & Liu, Hui & Wang, Ni & Huang, Lidong & Goh, Hui Hwang, 2023. "Incentive-based demand response under incomplete information based on the deep deterministic policy gradient," Applied Energy, Elsevier, vol. 351(C).
    15. Qiu, Dawei & Ye, Yujian & Papadaskalopoulos, Dimitrios & Strbac, Goran, 2021. "Scalable coordinated management of peer-to-peer energy trading: A multi-cluster deep reinforcement learning approach," Applied Energy, Elsevier, vol. 292(C).
    16. Zeng, Lanting & Qiu, Dawei & Sun, Mingyang, 2022. "Resilience enhancement of multi-agent reinforcement learning-based demand response against adversarial attacks," Applied Energy, Elsevier, vol. 324(C).
    17. Lu, Renzhi & Bai, Ruichang & Huang, Yuan & Li, Yuting & Jiang, Junhui & Ding, Yuemin, 2021. "Data-driven real-time price-based demand response for industrial facilities energy management," Applied Energy, Elsevier, vol. 283(C).
    18. Eduardo J. Salazar & Mauro Jurado & Mauricio E. Samper, 2023. "Reinforcement Learning-Based Pricing and Incentive Strategy for Demand Response in Smart Grids," Energies, MDPI, vol. 16(3), pages 1-33, February.
    19. Han, Gwangwoo & Joo, Hong-Jin & Lim, Hee-Won & An, Young-Sub & Lee, Wang-Je & Lee, Kyoung-Ho, 2023. "Data-driven heat pump operation strategy using rainbow deep reinforcement learning for significant reduction of electricity cost," Energy, Elsevier, vol. 270(C).
    20. Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:321:y:2022:i:c:s0306261922007231. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/description#description.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.