Author
Listed:
- Mansour, Shaza H.
- Azzam, Sarah M.
- Hasanien, Hany M.
- Tostado-Véliz, Marcos
- Alkuhayli, Abdulaziz
- Jurado, Francisco
Abstract
With the emergence of plug-in electric vehicles (PEVs) in smart grids (SGs), which supports SG decarbonization, it has become crucial to harness these PEVs by optimizing their charging and discharging schedules in a smart home setting. However, uncertainties in arrival time, departure time, and state of charge (SOC) make the scheduling task challenging. This paper proposes a two-stage approach that minimizes both the PEV charging cost and the electricity bill of the smart home. In the first stage, a deep reinforcement learning (DRL) method, a soft actor-critic (SAC)-based algorithm, is presented for smart home PEV charging/discharging (C/D) scheduling under a real-time pricing (RTP) tariff. In the second stage, the obtained schedule is provided as an input to a home energy management system (HEMS) problem. The HEMS problem is formulated as a mixed-integer linear programming (MILP) problem for scheduling home appliances and a battery energy storage system (BESS). SAC is compared with other reinforcement learning (RL) algorithms and with disorderly PEV C/D for four samples from different seasons. The results show that SAC achieves the highest average rewards and the lowest charging cost, and reaches the required SOC upon departure. Compared with other RL algorithms, SAC can achieve a PEV charging cost saving of up to 51.45% during the summer season. It is also shown that SAC yields a substantial cost reduction compared with the disorderly C/D schedule. The HEMS appliance and BESS schedules with the SAC-scheduled PEV are shown for the four samples. These schedules are compared with HEMS schedules under disorderly PEV scheduling and without a PEV. HEMS schedules with SAC-scheduled PEVs achieve cost savings of up to 83.29% and 15.69% compared with HEMS under disorderly PEV scheduling and with the HEMS schedule without PEVs, respectively.
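The paper's exact Markov decision process formulation, reward design, and hyperparameters are not given in this record, but the first-stage idea can be illustrated with a minimal, hedged sketch: a toy PEV charging/discharging environment driven by an RTP price signal, trained with an off-the-shelf SAC implementation. The environment shape, reward terms, SOC and power limits, and the use of Gymnasium with Stable-Baselines3's SAC are assumptions made for illustration only, not the authors' implementation.

import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import SAC

class PEVChargingEnv(gym.Env):
    """Toy PEV charging/discharging environment under a real-time price signal (illustrative only)."""

    def __init__(self, prices, soc_required=0.9, p_max=7.4, capacity=40.0):
        super().__init__()
        self.prices = np.asarray(prices, dtype=np.float32)  # hourly RTP tariff, $/kWh (placeholder data)
        self.soc_required = soc_required                    # SOC target at departure (assumed value)
        self.p_max = p_max                                   # charger power limit in kW (assumed value)
        self.capacity = capacity                             # battery capacity in kWh (assumed value)
        # Observation: [current SOC, hours until departure, current price]
        self.observation_space = spaces.Box(low=0.0, high=np.inf, shape=(3,), dtype=np.float32)
        # Action: charge (+) / discharge (-) power as a fraction of p_max
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32)

    def _obs(self):
        hours_left = len(self.prices) - self.t
        return np.array([self.soc, hours_left, self.prices[self.t]], dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.soc = float(self.np_random.uniform(0.2, 0.5))  # uncertain arrival SOC
        return self._obs(), {}

    def step(self, action):
        power = float(action[0]) * self.p_max                        # kW applied for one hour
        energy = float(np.clip(power, -self.soc * self.capacity,
                               (1.0 - self.soc) * self.capacity))    # keep SOC within [0, 1]
        self.soc += energy / self.capacity
        reward = -self.prices[self.t] * energy                       # pay when charging, earn when discharging
        self.t += 1
        terminated = self.t >= len(self.prices)                      # departure time reached
        if terminated and self.soc < self.soc_required:
            reward -= 10.0 * (self.soc_required - self.soc)          # penalty for unmet departure SOC
        obs = np.zeros(3, dtype=np.float32) if terminated else self._obs()
        return obs, float(reward), terminated, False, {}

# Train SAC on 12 synthetic hourly prices (placeholder data).
env = PEVChargingEnv(prices=np.random.uniform(0.05, 0.30, size=12))
model = SAC("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=5_000)

In a setup of this kind, the second stage would roll out the trained policy over a day's price profile to obtain the PEV C/D schedule, which is then passed as a fixed profile into the MILP-based HEMS model that schedules the home appliances and the BESS.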
Suggested Citation
Mansour, Shaza H. & Azzam, Sarah M. & Hasanien, Hany M. & Tostado-Véliz, Marcos & Alkuhayli, Abdulaziz & Jurado, Francisco, 2025.
"Deep reinforcement learning-based plug-in electric vehicle charging/discharging scheduling in a home energy management system,"
Energy, Elsevier, vol. 316(C).
Handle:
RePEc:eee:energy:v:316:y:2025:i:c:s0360544225000623
DOI: 10.1016/j.energy.2025.134420
Download full text from publisher
As access to this document is restricted, you may want to search for a different version of it.