
Integration of design and control for renewable energy systems with an application to anaerobic digestion: A deep deterministic policy gradient framework

Author

Listed:
  • Mendiola-Rodriguez, Tannia A.
  • Ricardez-Sandoval, Luis A.

Abstract

In recent years, the urgent need to develop sustainable processes that curb the effects of climate change has gained global attention and driven a transition to green technologies such as Anaerobic Digestion (AD) systems. Because these technologies exhibit complex dynamic behavior, there is motivation to seek new ways to optimize them. This study presents a Deep Deterministic Policy Gradient (DDPG) strategy for the integration of process design and control. DDPG is a state-of-the-art reinforcement learning algorithm used to search for optimal solutions. The proposed approach accounts for stochastic disturbances and parametric uncertainty, and a penalty function included in the reward function accounts for process constraints. The approach was tested on AD systems treating Tequila vinasses, with two reactor configurations explored under multiple scenarios. While the two-stage AD system required a larger capital investment in exchange for higher biogas production, the single-stage system required less capital investment but produced less biogas and therefore lower revenues than the two-stage system. The results showed that DDPG was able to identify optimal design and control profiles, making it an attractive method for optimal process design and operations management of renewable systems.
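The constraint-handling idea described in the abstract — folding a penalty for constraint violations into the reinforcement learning reward — can be sketched as follows. This is a minimal illustrative example, not the paper's actual formulation: the quadratic penalty form, the `weight` value, and all function and variable names are assumptions.

```python
def penalized_reward(revenue, capital_cost, constraint_values, limits, weight=100.0):
    """Illustrative reward: economic objective minus a penalty on constraint violations.

    constraint_values and limits are paired lists; a constraint g is considered
    violated when g exceeds its limit g_max, and only the violation is penalized.
    """
    base = revenue - capital_cost
    penalty = 0.0
    for g, g_max in zip(constraint_values, limits):
        violation = max(0.0, g - g_max)  # zero when the constraint is satisfied
        penalty += weight * violation ** 2
    return base - penalty

# A feasible state incurs no penalty: reward equals the economic objective.
assert penalized_reward(10.0, 2.0, [0.5], [1.0]) == 8.0
```

In this sketch, a feasible trajectory leaves the reward untouched, while violations are penalized progressively, steering the DDPG agent toward designs and control actions that respect process constraints without hard-coding them into the environment.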

Suggested Citation

  • Mendiola-Rodriguez, Tannia A. & Ricardez-Sandoval, Luis A., 2023. "Integration of design and control for renewable energy systems with an application to anaerobic digestion: A deep deterministic policy gradient framework," Energy, Elsevier, vol. 274(C).
  • Handle: RePEc:eee:energy:v:274:y:2023:i:c:s0360544223006060
    DOI: 10.1016/j.energy.2023.127212

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0360544223006060
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.energy.2023.127212?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Zhu, Jiaoyiling & Hu, Weihao & Xu, Xiao & Liu, Haoming & Pan, Li & Fan, Haoyang & Zhang, Zhenyuan & Chen, Zhe, 2022. "Optimal scheduling of a wind energy dominated distribution network via a deep reinforcement learning approach," Renewable Energy, Elsevier, vol. 201(P1), pages 792-801.
    2. Zhang, Yijie & Ma, Tao & Elia Campana, Pietro & Yamaguchi, Yohei & Dai, Yanjun, 2020. "A techno-economic sizing method for grid-connected household photovoltaic battery systems," Applied Energy, Elsevier, vol. 269(C).
    3. Ahmad, Tanveer & Chen, Huanxin, 2019. "Deep learning for multi-scale smart energy forecasting," Energy, Elsevier, vol. 175(C), pages 98-112.
    4. Zeyue Sun & Mohsen Eskandari & Chaoran Zheng & Ming Li, 2022. "Handling Computation Hardness and Time Complexity Issue of Battery Energy Storage Scheduling in Microgrids by Deep Reinforcement Learning," Energies, MDPI, vol. 16(1), pages 1-20, December.
    5. Kandasamy, Jeevitha & Ramachandran, Rajeswari & Veerasamy, Veerapandiyan & Irudayaraj, Andrew Xavier Raj, 2024. "Distributed leader-follower based adaptive consensus control for networked microgrids," Applied Energy, Elsevier, vol. 353(PA).
    6. Fathy, Ahmed, 2023. "Bald eagle search optimizer-based energy management strategy for microgrid with renewable sources and electric vehicles," Applied Energy, Elsevier, vol. 334(C).
    7. Wu, Yuankai & Tan, Huachun & Peng, Jiankun & Zhang, Hailong & He, Hongwen, 2019. "Deep reinforcement learning of energy management with continuous control strategy and traffic information for a series-parallel plug-in hybrid electric bus," Applied Energy, Elsevier, vol. 247(C), pages 454-466.
    8. Zhu, Ziqing & Hu, Ze & Chan, Ka Wing & Bu, Siqi & Zhou, Bin & Xia, Shiwei, 2023. "Reinforcement learning in deregulated energy market: A comprehensive review," Applied Energy, Elsevier, vol. 329(C).
    9. Qi, Chunyang & Zhu, Yiwen & Song, Chuanxue & Yan, Guangfu & Xiao, Feng & Da wang, & Zhang, Xu & Cao, Jingwei & Song, Shixin, 2022. "Hierarchical reinforcement learning based energy management strategy for hybrid electric vehicle," Energy, Elsevier, vol. 238(PA).
    10. Akhil Joseph & Patil Balachandra, 2020. "Energy Internet, the Future Electricity System: Overview, Concept, Model Structure, and Mechanism," Energies, MDPI, vol. 13(16), pages 1-26, August.
    11. Seongwoo Lee & Joonho Seon & Byungsun Hwang & Soohyun Kim & Youngghyu Sun & Jinyoung Kim, 2024. "Recent Trends and Issues of Energy Management Systems Using Machine Learning," Energies, MDPI, vol. 17(3), pages 1-24, January.
    12. Touzani, Samir & Prakash, Anand Krishnan & Wang, Zhe & Agarwal, Shreya & Pritoni, Marco & Kiran, Mariam & Brown, Richard & Granderson, Jessica, 2021. "Controlling distributed energy resources via deep reinforcement learning for load flexibility and energy efficiency," Applied Energy, Elsevier, vol. 304(C).
    13. Zhao, Liyuan & Yang, Ting & Li, Wei & Zomaya, Albert Y., 2022. "Deep reinforcement learning-based joint load scheduling for household multi-energy system," Applied Energy, Elsevier, vol. 324(C).
    14. Samuel-Soma M. Ajibade & Festus Victor Bekun & Festus Fatai Adedoyin & Bright Akwasi Gyamfi & Anthonia Oluwatosin Adediran, 2023. "Machine Learning Applications in Renewable Energy (MLARE) Research: A Publication Trend and Bibliometric Analysis Study (2012–2021)," Clean Technol., MDPI, vol. 5(2), pages 1-21, April.
    15. Qi, Yunying & Xu, Xiao & Liu, Youbo & Pan, Li & Liu, Junyong & Hu, Weihao, 2024. "Intelligent energy management for an on-grid hydrogen refueling station based on dueling double deep Q network algorithm with NoisyNet," Renewable Energy, Elsevier, vol. 222(C).
    16. Ma, Tao & Zhang, Yijie & Gu, Wenbo & Xiao, Gang & Yang, Hongxing & Wang, Shuxiao, 2022. "Strategy comparison and techno-economic evaluation of a grid-connected photovoltaic-battery system," Renewable Energy, Elsevier, vol. 197(C), pages 1049-1060.
    17. Yi Kuang & Xiuli Wang & Hongyang Zhao & Yijun Huang & Xianlong Chen & Xifan Wang, 2020. "Agent-Based Energy Sharing Mechanism Using Deep Deterministic Policy Gradient Algorithm," Energies, MDPI, vol. 13(19), pages 1-20, September.
    18. Yang, Ting & Zhao, Liyuan & Li, Wei & Wu, Jianzhong & Zomaya, Albert Y., 2021. "Towards healthy and cost-effective indoor environment management in smart homes: A deep reinforcement learning approach," Applied Energy, Elsevier, vol. 300(C).
    19. Sajjad Miran & Muhammad Tamoor & Tayybah Kiren & Faakhar Raza & Muhammad Imtiaz Hussain & Jun-Tae Kim, 2022. "Optimization of Standalone Photovoltaic Drip Irrigation System: A Simulation Study," Sustainability, MDPI, vol. 14(14), pages 1-20, July.
    20. Kong, Xiangyu & Kong, Deqian & Yao, Jingtao & Bai, Linquan & Xiao, Jie, 2020. "Online pricing of demand response based on long short-term memory and reinforcement learning," Applied Energy, Elsevier, vol. 271(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:energy:v:274:y:2023:i:c:s0360544223006060. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.journals.elsevier.com/energy.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.