A reinforcement and imitation learning method for pricing strategy of electricity retailer with customers’ flexibility
DOI: 10.1016/j.apenergy.2022.119543
References listed on IDEAS
- Nojavan, Sayyad & Zare, Kazem & Mohammadi-Ivatloo, Behnam, 2017. "Optimal stochastic energy management of retailer based on selling price determination under smart grid environment in the presence of demand response program," Applied Energy, Elsevier, vol. 187(C), pages 449-464.
- Feihu Hu & Xuan Feng & Hui Cao, 2018. "A Short-Term Decision Model for Electricity Retailers: Electricity Procurement and Time-of-Use Pricing," Energies, MDPI, vol. 11(12), pages 1-18, November.
- Lu, Renzhi & Hong, Seung Ho, 2019. "Incentive-based demand response for smart grid with reinforcement learning and deep neural network," Applied Energy, Elsevier, vol. 236(C), pages 937-949.
- Saeian, Hosein & Niknam, Taher & Zare, Mohsen & Aghaei, Jamshid, 2022. "Coordinated optimal bidding strategies methods of aggregated microgrids: A game theory-based demand side management under an electricity market environment," Energy, Elsevier, vol. 245(C).
- Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charles Beattie & al., 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
- Tuballa, Maria Lorena & Abundo, Michael Lochinvar, 2016. "A review of the development of Smart Grid technologies," Renewable and Sustainable Energy Reviews, Elsevier, vol. 59(C), pages 710-725.
Citations
Cited by:
- Siying Xu & Gaoyu Zhang & Xianzhi Yuan, 2024. "An Enterprise Multi-agent Model with Game Q-Learning Based on a Single Decision Factor," Computational Economics, Springer;Society for Computational Economics, vol. 64(4), pages 2523-2562, October.
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one.
- Razzak, Abdur & Islam, Md. Tariqul & Roy, Palash & Razzaque, Md. Abdur & Hassan, Md. Rafiul & Hassan, Mohammad Mehedi, 2024. "Leveraging Deep Q-Learning to maximize consumer quality of experience in smart grid," Energy, Elsevier, vol. 290(C).
- Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
- Wu, Yuankai & Tan, Huachun & Peng, Jiankun & Zhang, Hailong & He, Hongwen, 2019. "Deep reinforcement learning of energy management with continuous control strategy and traffic information for a series-parallel plug-in hybrid electric bus," Applied Energy, Elsevier, vol. 247(C), pages 454-466.
- Qi, Chunyang & Zhu, Yiwen & Song, Chuanxue & Yan, Guangfu & Xiao, Feng & Wang, Da & Zhang, Xu & Cao, Jingwei & Song, Shixin, 2022. "Hierarchical reinforcement learning based energy management strategy for hybrid electric vehicle," Energy, Elsevier, vol. 238(PA).
- Mohseni, Soheil & Brent, Alan C. & Kelly, Scott & Browne, Will N., 2022. "Demand response-integrated investment and operational planning of renewable and sustainable energy systems considering forecast uncertainties: A systematic review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 158(C).
- Park, Keonwoo & Moon, Ilkyeong, 2022. "Multi-agent deep reinforcement learning approach for EV charging scheduling in a smart grid," Applied Energy, Elsevier, vol. 328(C).
- Antonopoulos, Ioannis & Robu, Valentin & Couraud, Benoit & Kirli, Desen & Norbu, Sonam & Kiprakis, Aristides & Flynn, David & Elizondo-Gonzalez, Sergio & Wattam, Steve, 2020. "Artificial intelligence and machine learning approaches to energy demand-side response: A systematic review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 130(C).
- Deng, Tingting & Yan, Wenzhou & Nojavan, Sayyad & Jermsittiparsert, Kittisak, 2020. "Risk evaluation and retail electricity pricing using downside risk constraints method," Energy, Elsevier, vol. 192(C).
- Ma, Siyu & Liu, Hui & Wang, Ni & Huang, Lidong & Goh, Hui Hwang, 2023. "Incentive-based demand response under incomplete information based on the deep deterministic policy gradient," Applied Energy, Elsevier, vol. 351(C).
- Lu, Renzhi & Hong, Seung Ho & Zhang, Xiongfeng, 2018. "A Dynamic pricing demand response algorithm for smart grid: Reinforcement learning approach," Applied Energy, Elsevier, vol. 220(C), pages 220-230.
- Dadashi, Mojtaba & Haghifam, Sara & Zare, Kazem & Haghifam, Mahmoud-Reza & Abapour, Mehdi, 2020. "Short-term scheduling of electricity retailers in the presence of Demand Response Aggregators: A two-stage stochastic Bi-Level programming approach," Energy, Elsevier, vol. 205(C).
- Xie, Jiahan & Ajagekar, Akshay & You, Fengqi, 2023. "Multi-Agent attention-based deep reinforcement learning for demand response in grid-responsive buildings," Applied Energy, Elsevier, vol. 342(C).
- Han, Gwangwoo & Joo, Hong-Jin & Lim, Hee-Won & An, Young-Sub & Lee, Wang-Je & Lee, Kyoung-Ho, 2023. "Data-driven heat pump operation strategy using rainbow deep reinforcement learning for significant reduction of electricity cost," Energy, Elsevier, vol. 270(C).
- Perera, A.T.D. & Kamalaruban, Parameswaran, 2021. "Applications of reinforcement learning in energy systems," Renewable and Sustainable Energy Reviews, Elsevier, vol. 137(C).
- Ning Zhang & Nien-Che Yang & Jian-Hong Liu, 2021. "Optimal Time-of-Use Electricity Price for a Microgrid System Considering Profit of Power Company and Demand Users," Energies, MDPI, vol. 14(19), pages 1-13, October.
- Guo, Chenyu & Wang, Xin & Zheng, Yihui & Zhang, Feng, 2022. "Real-time optimal energy management of microgrid with uncertainties based on deep reinforcement learning," Energy, Elsevier, vol. 238(PC).
- Dinh, Huy Truong & Lee, Kyu-haeng & Kim, Daehee, 2022. "Supervised-learning-based hour-ahead demand response for a behavior-based home energy management system approximating MILP optimization," Applied Energy, Elsevier, vol. 321(C).
- Lu, Renzhi & Li, Yi-Chang & Li, Yuting & Jiang, Junhui & Ding, Yuemin, 2020. "Multi-agent deep reinforcement learning based demand response for discrete manufacturing systems energy management," Applied Energy, Elsevier, vol. 276(C).
- Tulika Saha & Sriparna Saha & Pushpak Bhattacharyya, 2020. "Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-28, July.
More about this item
Keywords
Electricity market; Reinforcement learning; Imitation learning; Smart grid; Broker
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:appene:v:323:y:2022:i:c:s0306261922008571.