Authors
Listed:
- Philipp Wohlgenannt
(Josef Ressel Centre for Intelligent Thermal Energy Systems, Illwerke vkw Endowed Professorship for Energy Efficiency, Energy Research Centre, Vorarlberg University of Applied Sciences, Hochschulstrasse 1, 6850 Dornbirn, Austria;
Faculty of Engineering and Science, University of Agder, Jon Lilletuns vei 9, 4879 Grimstad, Norway)
- Sebastian Hegenbart
(Department of Engineering and Technology, Vorarlberg University of Applied Sciences, Hochschulstrasse 1, 6850 Dornbirn, Austria;
Digital Factory Vorarlberg GmbH, Hochschulstrasse 1, 6850 Dornbirn, Austria)
- Elias Eder
(Josef Ressel Centre for Intelligent Thermal Energy Systems, Illwerke vkw Endowed Professorship for Energy Efficiency, Energy Research Centre, Vorarlberg University of Applied Sciences, Hochschulstrasse 1, 6850 Dornbirn, Austria)
- Mohan Kolhe
(Faculty of Engineering and Science, University of Agder, Jon Lilletuns vei 9, 4879 Grimstad, Norway)
- Peter Kepplinger
(Josef Ressel Centre for Intelligent Thermal Energy Systems, Illwerke vkw Endowed Professorship for Energy Efficiency, Energy Research Centre, Vorarlberg University of Applied Sciences, Hochschulstrasse 1, 6850 Dornbirn, Austria)
Abstract
The food industry faces significant challenges in managing operational costs due to its high energy intensity and rising energy prices. Industrial food-processing facilities, with substantial thermal capacities and large demands for cooling and heating, offer promising opportunities for demand response (DR) strategies. This study explores the application of deep reinforcement learning (RL) as an innovative, data-driven approach to DR in the food industry. By leveraging the adaptive, self-learning capabilities of RL, energy costs in the investigated plant are effectively reduced. The RL algorithm was compared with the well-established optimization method of mixed-integer linear programming (MILP), and both were benchmarked against a reference scenario without DR. The two optimization strategies demonstrate cost savings of 17.57% and 18.65% for RL and MILP, respectively. Although RL is slightly less efficient in cost reduction, it significantly outperforms MILP in computational speed, being approximately 20 times faster overall. During operation, RL needs only 2 ms per optimization, compared to 19 s for MILP, making it a promising optimization tool for edge computing. Moreover, while MILP's computation time grows considerably with the number of binary variables, RL efficiently learns the dynamic system behavior and scales to more complex systems without significant performance degradation. These results highlight that deep RL, when applied to DR, offers substantial cost savings and computational efficiency, with broad applicability to energy management in various domains.
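As a rough illustration of the approach summarized above, the sketch below trains a price-responsive compressor schedule for a simplified cold store using tabular Q-learning (standing in for the paper's deep RL agent) and then rolls out the learned policy. Every number in it (prices, temperature band, cooling rates, penalty weight) is invented for illustration and not taken from the paper; the point is the structure of the DR problem and why inference is fast once training is done.

    # Minimal sketch only: tabular Q-learning on a toy cold-store model.
    # All parameters are hypothetical; the paper's plant model, state/action
    # spaces, and deep RL algorithm are described in the full text.
    import numpy as np

    rng = np.random.default_rng(0)

    # Invented hourly electricity prices (EUR/kWh) for one day.
    prices = 0.10 + 0.08 * np.sin(np.linspace(0.0, 2.0 * np.pi, 24))

    T_MIN, T_MAX = -20.0, -16.0  # allowed storage temperature band (degC)
    COOL_RATE, DRIFT = 0.8, 0.4  # degC/h with compressor on / ambient drift
    P_COOL = 50.0                # compressor electric power when on (kW)
    N_BINS = 8                   # temperature discretization for the Q-table

    def temp_bin(temp):
        """Map a temperature in [T_MIN, T_MAX] to a discrete state index."""
        frac = (temp - T_MIN) / (T_MAX - T_MIN)
        return min(N_BINS - 1, max(0, int(frac * N_BINS)))

    def step(temp, hour, action):
        """One-hour transition; action 1 = compressor on, 0 = off.
        Reward = -(energy cost + penalty for leaving the temperature band)."""
        new_temp = temp - COOL_RATE * action + DRIFT
        cost = prices[hour] * P_COOL * action
        penalty = 10.0 if not (T_MIN <= new_temp <= T_MAX) else 0.0
        return float(np.clip(new_temp, T_MIN, T_MAX)), -(cost + penalty)

    # Train: Q[hour, temperature bin, action], epsilon-greedy exploration.
    Q = np.zeros((24, N_BINS, 2))
    ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
    for episode in range(5000):
        temp = -18.0
        for hour in range(24):
            s = temp_bin(temp)
            if rng.random() < EPS:
                a = int(rng.integers(2))
            else:
                a = int(np.argmax(Q[hour, s]))
            temp, r = step(temp, hour, a)
            nxt = Q[hour + 1, temp_bin(temp)].max() if hour < 23 else 0.0
            Q[hour, s, a] += ALPHA * (r + GAMMA * nxt - Q[hour, s, a])

    # Operate: each decision is just an argmax over a table row, which is
    # why RL inference is so much faster than re-solving a MILP each step.
    temp, total_cost = -18.0, 0.0
    for hour in range(24):
        a = int(np.argmax(Q[hour, temp_bin(temp)]))
        temp, r = step(temp, hour, a)
        total_cost -= r
    print(f"Greedy-policy daily cost incl. penalties: {total_cost:.2f} EUR")

During the greedy rollout, each hourly decision is a single argmax over a table row; a deep RL agent replaces the table with a neural network forward pass, which corresponds to the per-optimization operation the abstract times at roughly 2 ms, versus re-solving a MILP (about 19 s) at every interval. The band-violation penalty here is a soft stand-in for the temperature constraints a MILP formulation would enforce as hard constraints.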
Suggested Citation
Philipp Wohlgenannt & Sebastian Hegenbart & Elias Eder & Mohan Kolhe & Peter Kepplinger, 2024.
"Energy Demand Response in a Food-Processing Plant: A Deep Reinforcement Learning Approach,"
Energies, MDPI, vol. 17(24), pages 1-19, December.
Handle:
RePEc:gam:jeners:v:17:y:2024:i:24:p:6430-:d:1548622