
Relaxed deep learning for real-time economic generation dispatch and control with unified time scale

Author

Listed:
  • Yin, Linfei
  • Yu, Tao
  • Zhang, Xiaoshun
  • Yang, Bo

Abstract

To address the coordination of multi-time-scale economic dispatch and generation control in power systems, i.e., long-term optimization, short-term optimization, and real-time control, this paper designs a real-time economic generation dispatch and control (REG) framework with a unified time scale. By introducing a relaxed operator into a deep neural network (DNN), relaxed deep learning (RDL) is proposed for the REG framework as an alternative to the conventional generation control framework, which combines unit commitment (UC), economic dispatch (ED), automatic generation control (AGC), and generation command dispatch (GCD). Compared with 1200 combinations of conventional generation control algorithms in two simulations, i.e., the IEEE 10-generator 39-bus New England power system and the 8-generator Hainan power grid (China), RDL obtains the best control performance, with smaller frequency deviation, smaller area control error, lower total cost, and fewer reverse regulations. Although RDL requires a relatively long pre-training time, the simulation results verify the effectiveness and feasibility of the proposed RDL for the REG framework.
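
The abstract does not specify the form of the relaxed operator or the network architecture, but the core idea of the REG framework, a single DNN controller that replaces the separate UC/ED/AGC/GCD layers and maps the real-time system state directly to generator commands on one time scale, can be sketched as follows. Everything in this sketch (the class name, the state vector, and the softmax-based "relaxation" that keeps the commands summing to the total demand) is a hypothetical illustration of the idea, not the authors' implementation.

# Hypothetical sketch (not the authors' code): one DNN maps a real-time
# system state to per-generator commands on a single, unified time scale,
# standing in for the separate UC/ED/AGC/GCD layers described above.
# The softmax output is one plausible reading of a "relaxed" operator:
# it turns hard commitment decisions into continuous dispatch shares.
import torch
import torch.nn as nn

class UnifiedDispatchNet(nn.Module):
    def __init__(self, n_state: int, n_generators: int, hidden: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(n_state, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_generators),
        )

    def forward(self, state: torch.Tensor, total_demand: torch.Tensor) -> torch.Tensor:
        # Dispatch shares sum to 1, so the commands always sum to the total demand.
        shares = torch.softmax(self.backbone(state), dim=-1)
        return shares * total_demand.unsqueeze(-1)

# Illustrative use: state = [frequency deviation, area control error, load, time of day].
net = UnifiedDispatchNet(n_state=4, n_generators=10)
state = torch.tensor([[0.02, -5.0, 310.0, 0.5]])       # made-up values
commands = net(state, total_demand=torch.tensor([310.0]))
print(commands.shape)                                   # torch.Size([1, 10]) set-points

In the paper's setting such a controller would be pre-trained offline (the relatively long pre-training time noted above) and then queried at each real-time control step, which is what makes a single, unified time scale possible.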

Suggested Citation

  • Yin, Linfei & Yu, Tao & Zhang, Xiaoshun & Yang, Bo, 2018. "Relaxed deep learning for real-time economic generation dispatch and control with unified time scale," Energy, Elsevier, vol. 149(C), pages 11-23.
  • Handle: RePEc:eee:energy:v:149:y:2018:i:c:p:11-23
    DOI: 10.1016/j.energy.2018.01.165

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0360544218301932
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.energy.2018.01.165?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Liang, Zhengtang & Liang, Jun & Zhang, Li & Wang, Chengfu & Yun, Zhihao & Zhang, Xu, 2015. "Analysis of multi-scale chaotic characteristics of wind power based on Hilbert–Huang transform and Hurst analysis," Applied Energy, Elsevier, vol. 159(C), pages 51-61.
    2. Xi, Lei & Yu, Tao & Yang, Bo & Zhang, Xiaoshun & Qiu, Xuanyu, 2016. "A wolf pack hunting strategy based virtual tribes control for automatic generation control of smart grid," Applied Energy, Elsevier, vol. 178(C), pages 198-211.
    3. Kia, Mohsen & Nazar, Mehrdad Setayesh & Sepasian, Mohammad Sadegh & Heidari, Alireza & Siano, Pierluigi, 2017. "Optimal day ahead scheduling of combined heat and power units with electrical and thermal storage considering security constraint of power system," Energy, Elsevier, vol. 120(C), pages 241-252.
    4. Zhang, Xiaoshun & Yu, Tao & Yang, Bo & Li, Li, 2016. "Virtual generation tribe based robust collaborative consensus algorithm for dynamic generation command dispatch optimization of smart grid," Energy, Elsevier, vol. 101(C), pages 34-51.
    5. David Silver & Aja Huang & Chris J. Maddison & Arthur Guez & Laurent Sifre & George van den Driessche & Julian Schrittwieser & Ioannis Antonoglou & Veda Panneershelvam & Marc Lanctot & Sander Dieleman, 2016. "Mastering the game of Go with deep neural networks and tree search," Nature, Nature, vol. 529(7587), pages 484-489, January.
    6. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charles Beattie et al., 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Yin, Linfei & Zhang, Bin, 2021. "Time series generative adversarial network controller for long-term smart generation control of microgrids," Applied Energy, Elsevier, vol. 281(C).
    2. Yin, Linfei & Zhang, Bin, 2023. "Relaxed deep generative adversarial networks for real-time economic smart generation dispatch and control of integrated energy systems," Applied Energy, Elsevier, vol. 330(PA).
    3. Yin, Linfei & Zhao, Lulin, 2021. "Rejectable deep differential dynamic programming for real-time integrated generation dispatch and control of micro-grids," Energy, Elsevier, vol. 225(C).
    4. Yin, Linfei & Gao, Qi & Zhao, Lulin & Wang, Tao, 2020. "Expandable deep learning for real-time economic generation dispatch and control of three-state energies based future smart grids," Energy, Elsevier, vol. 191(C).
    5. Shin, Hansol & Kim, Tae Hyun & Kim, Hyoungtae & Lee, Sungwoo & Kim, Wook, 2019. "Environmental shutdown of coal-fired generators for greenhouse gas reduction: A case study of South Korea," Applied Energy, Elsevier, vol. 252(C), pages 1-1.
    6. Rodríguez, Fermín & Galarza, Ainhoa & Vasquez, Juan C. & Guerrero, Josep M., 2022. "Using deep learning and meteorological parameters to forecast the photovoltaic generators intra-hour output power interval for smart grid control," Energy, Elsevier, vol. 239(PB).
    7. Yin, Linfei & Luo, Shikui & Ma, Chenxiao, 2021. "Expandable depth and width adaptive dynamic programming for economic smart generation control of smart grids," Energy, Elsevier, vol. 232(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
    2. Neha Soni & Enakshi Khular Sharma & Narotam Singh & Amita Kapoor, 2019. "Impact of Artificial Intelligence on Businesses: from Research, Innovation, Market Deployment to Future Shifts in Business Models," Papers 1905.02092, arXiv.org.
    3. Taejong Joo & Hyunyoung Jun & Dongmin Shin, 2022. "Task Allocation in Human–Machine Manufacturing Systems Using Deep Reinforcement Learning," Sustainability, MDPI, vol. 14(4), pages 1-18, February.
    4. Oleh Lukianykhin & Tetiana Bogodorova, 2021. "Voltage Control-Based Ancillary Service Using Deep Reinforcement Learning," Energies, MDPI, vol. 14(8), pages 1-22, April.
    5. Chen, Jiaxin & Shu, Hong & Tang, Xiaolin & Liu, Teng & Wang, Weida, 2022. "Deep reinforcement learning-based multi-objective control of hybrid power system combined with road recognition under time-varying environment," Energy, Elsevier, vol. 239(PC).
    6. Amirhosein Mosavi & Yaser Faghan & Pedram Ghamisi & Puhong Duan & Sina Faizollahzadeh Ardabili & Ely Salwana & Shahab S. Band, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Mathematics, MDPI, vol. 8(10), pages 1-42, September.
    7. Zhang, Yihao & Chai, Zhaojie & Lykotrafitis, George, 2021. "Deep reinforcement learning with a particle dynamics environment applied to emergency evacuation of a room with obstacles," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 571(C).
    8. Yifeng Guo & Xingyu Fu & Yuyan Shi & Mingwen Liu, 2018. "Robust Log-Optimal Strategy with Reinforcement Learning," Papers 1805.00205, arXiv.org.
    9. Hamed Khalili, 2024. "Deep Learning Pricing of Processing Firms in Agricultural Markets," Agriculture, MDPI, vol. 14(5), pages 1-14, April.
    10. Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
    11. Chengmin Zhou & Bingding Huang & Pasi Fränti, 2022. "A review of motion planning algorithms for intelligent robots," Journal of Intelligent Manufacturing, Springer, vol. 33(2), pages 387-424, February.
    12. Justin P. Johnson & Andrew Rhodes & Matthijs Wildenbeest, 2023. "Platform Design When Sellers Use Pricing Algorithms," Econometrica, Econometric Society, vol. 91(5), pages 1841-1879, September.
    13. Yingfei Wang & Inbal Yahav & Balaji Padmanabhan, 2024. "Smart Testing with Vaccination: A Bandit Algorithm for Active Sampling for Managing COVID-19," Information Systems Research, INFORMS, vol. 35(1), pages 120-144, March.
    14. Iwao Maeda & David deGraw & Michiharu Kitano & Hiroyasu Matsushima & Hiroki Sakaji & Kiyoshi Izumi & Atsuo Kato, 2020. "Deep Reinforcement Learning in Agent Based Financial Market Simulation," JRFM, MDPI, vol. 13(4), pages 1-17, April.
    15. Esmaeili Aliabadi, Danial & Chan, Katrina, 2022. "The emerging threat of artificial intelligence on competition in liberalized electricity markets: A deep Q-network approach," Applied Energy, Elsevier, vol. 325(C).
    16. Li, Wenqing & Ni, Shaoquan, 2022. "Train timetabling with the general learning environment and multi-agent deep reinforcement learning," Transportation Research Part B: Methodological, Elsevier, vol. 157(C), pages 230-251.
    17. Bo Hu & Jiaxi Li & Shuang Li & Jie Yang, 2019. "A Hybrid End-to-End Control Strategy Combining Dueling Deep Q-network and PID for Transient Boost Control of a Diesel Engine with Variable Geometry Turbocharger and Cooled EGR," Energies, MDPI, vol. 12(19), pages 1-15, September.
    18. Hao, Peng & Wei, Zhensong & Bai, Zhengwei & Barth, Matthew J., 2020. "Developing an Adaptive Strategy for Connected Eco-Driving Under Uncertain Traffic and Signal Conditions," Institute of Transportation Studies, Working Paper Series qt2fv5063b, Institute of Transportation Studies, UC Davis.
    19. Tambet Matiisen & Aqeel Labash & Daniel Majoral & Jaan Aru & Raul Vicente, 2022. "Do Deep Reinforcement Learning Agents Model Intentions?," Stats, MDPI, vol. 6(1), pages 1-17, December.
    20. Biemann, Marco & Scheller, Fabian & Liu, Xiufeng & Huang, Lizhen, 2021. "Experimental evaluation of model-free reinforcement learning algorithms for continuous HVAC control," Applied Energy, Elsevier, vol. 298(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:energy:v:149:y:2018:i:c:p:11-23. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.journals.elsevier.com/energy.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.
