
On reliability of reinforcement learning based production scheduling systems: a comparative survey

Author

Listed:
  • Constantin Waubert de Puiseau (University of Wuppertal)
  • Richard Meyes (University of Wuppertal)
  • Tobias Meisen (University of Wuppertal)

Abstract

The deep reinforcement learning (DRL) community has published remarkable results on complex strategic planning problems, most famously in virtual scenarios for board and video games. However, the application to real-world scenarios such as production scheduling (PS) problems remains a challenge for current research, because real-world application fields typically show specific requirement profiles that state-of-the-art DRL research often does not consider. This survey addresses questions raised in the domain of industrial engineering regarding the reliability of production schedules obtained through DRL-based scheduling approaches. We review definitions and evaluation measures of reliability both in the classical numerical optimization domain, with a focus on PS problems, and more broadly in the DRL domain. Furthermore, we define common ground and terminology and present a collection of quantifiable reliability definitions for use in this interdisciplinary domain. Finally, we identify promising directions of current DRL research as a basis for tackling different aspects of reliability in PS applications in the future.
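To make the notion of a quantifiable reliability measure concrete, the sketch below estimates two statistics frequently used to characterize schedule reliability under disruptions: the expected makespan degradation relative to the planned makespan, and the conditional value-at-risk (CVaR) of the realized makespan. This is a minimal illustration, not the article's own taxonomy; the serial execution model, the disruption parameters, and all names (SCHEDULE, simulate_makespan, breakdown_prob, alpha) are assumptions made for the example.

```python
# Illustrative sketch (not taken from the article): Monte Carlo estimate of
# two quantifiable reliability measures for a fixed schedule -- expected
# makespan degradation and CVaR_alpha of the makespan -- under noisy
# processing times and random machine breakdowns.
import random
from statistics import mean

# Hypothetical deterministic schedule: list of (machine, planned duration).
SCHEDULE = [("M1", 4.0), ("M2", 3.0), ("M1", 5.0), ("M3", 2.0), ("M2", 6.0)]


def simulate_makespan(schedule, breakdown_prob=0.1, repair_time=3.0,
                      noise=0.2, rng=random):
    """Sample one realized makespan for a serially executed schedule.

    Each operation's duration is perturbed multiplicatively, and each
    operation independently suffers a breakdown (adding repair_time)
    with probability breakdown_prob. Treating the machines as a single
    serial resource keeps the example short; a real job-shop simulator
    would track machine and job availability separately.
    """
    t = 0.0
    for _machine, planned in schedule:
        realized = planned * (1.0 + rng.uniform(-noise, noise))
        if rng.random() < breakdown_prob:
            realized += repair_time
        t += realized
    return t


def reliability_measures(schedule, n_samples=10_000, alpha=0.95):
    """Return (expected makespan degradation, CVaR_alpha of the makespan)."""
    planned_makespan = sum(duration for _machine, duration in schedule)
    samples = sorted(simulate_makespan(schedule) for _ in range(n_samples))
    expected_degradation = mean(samples) - planned_makespan
    tail = samples[int(alpha * n_samples):]  # worst (1 - alpha) share of runs
    return expected_degradation, mean(tail)


if __name__ == "__main__":
    degradation, cvar = reliability_measures(SCHEDULE)
    print(f"expected makespan degradation: {degradation:.2f}")
    print(f"CVaR_0.95 of the makespan:     {cvar:.2f}")
```

Under this assumed definition, a schedule, or the DRL policy that produces it, counts as more reliable when its expected degradation and its makespan CVaR are smaller.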

Suggested Citation

  • Constantin Waubert de Puiseau & Richard Meyes & Tobias Meisen, 2022. "On reliability of reinforcement learning based production scheduling systems: a comparative survey," Journal of Intelligent Manufacturing, Springer, vol. 33(4), pages 911-927, April.
  • Handle: RePEc:spr:joinma:v:33:y:2022:i:4:d:10.1007_s10845-022-01915-2
    DOI: 10.1007/s10845-022-01915-2

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s10845-022-01915-2
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1007/s10845-022-01915-2?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Sotskov, Y. & Sotskova, N. Y. & Werner, F., 1997. "Stability of an optimal schedule in a job shop," Omega, Elsevier, vol. 25(4), pages 397-414, August.
    2. Allahverdi, Ali, 2016. "A survey of scheduling problems with no-wait in process," European Journal of Operational Research, Elsevier, vol. 255(3), pages 665-686.
    3. Oriol Vinyals & Igor Babuschkin & Wojciech M. Czarnecki & Michaël Mathieu & Andrew Dudzik & Junyoung Chung & David H. Choi & Richard Powell & Timo Ewalds & Petko Georgiev & Junhyuk Oh & Dan Horgan & M, 2019. "Grandmaster level in StarCraft II using multi-agent reinforcement learning," Nature, Nature, vol. 575(7782), pages 350-354, November.
    4. James C. Bean & John R. Birge & John Mittenthal & Charles E. Noon, 1991. "Matchup Scheduling with Multiple Resources, Release Dates and Disruptions," Operations Research, INFORMS, vol. 39(3), pages 470-483, June.
    5. Nicole Bäuerle & Jonathan Ott, 2011. "Markov Decision Processes with Average-Value-at-Risk criteria," Mathematical Methods of Operations Research, Springer; Gesellschaft für Operations Research (GOR); Nederlands Genootschap voor Besliskunde (NGB), vol. 74(3), pages 361-379, December.
    6. Wolfram Wiesemann & Daniel Kuhn & Berç Rustem, 2013. "Robust Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 38(1), pages 153-183, February.
    7. Nicole Bäuerle & Ulrich Rieder, 2014. "More Risk-Sensitive Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 39(1), pages 105-120, February.
    8. Marc G. Bellemare & Salvatore Candido & Pablo Samuel Castro & Jun Gong & Marlos C. Machado & Subhodeep Moitra & Sameera S. Ponda & Ziyu Wang, 2020. "Autonomous navigation of stratospheric balloons using reinforcement learning," Nature, Nature, vol. 588(7836), pages 77-82, December.
    9. Al-Hinai, Nasr & ElMekkawy, T.Y., 2011. "Robust and stable flexible job shop scheduling with random machine breakdowns using a hybrid genetic algorithm," International Journal of Production Economics, Elsevier, vol. 132(2), pages 279-291, August.
    10. Kfir Arviv & Helman Stern & Yael Edan, 2016. "Collaborative reinforcement learning for a two-robot job transfer flow-shop scheduling problem," International Journal of Production Research, Taylor & Francis Journals, vol. 54(4), pages 1196-1209, February.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Tala Talaei Khoei & Naima Kaabouch, 2023. "Machine Learning: Models, Challenges, and Research Directions," Future Internet, MDPI, vol. 15(10), pages 1-29, October.
    2. Moiz Ahmad & Muhammad Babar Ramzan & Muhammad Omair & Muhammad Salman Habib, 2024. "Integrating Risk-Averse and Constrained Reinforcement Learning for Robust Decision-Making in High-Stakes Scenarios," Mathematics, MDPI, vol. 12(13), pages 1-32, June.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Nicole Bauerle & Alexander Glauner, 2020. "Distributionally Robust Markov Decision Processes and their Connection to Risk Measures," Papers 2007.13103, arXiv.org.
    2. Jinming Xu & Yuan Lin, 2024. "Energy Management for Hybrid Electric Vehicles Using Safe Hybrid-Action Reinforcement Learning," Mathematics, MDPI, vol. 12(5), pages 1-20, February.
    3. Malte Reinschmidt & József Fortágh & Andreas Günther & Valentin V. Volchkov, 2024. "Reinforcement learning in cold atom experiments," Nature Communications, Nature, vol. 15(1), pages 1-11, December.
    4. Nicole Bäuerle & Alexander Glauner, 2021. "Minimizing spectral risk measures applied to Markov decision processes," Mathematical Methods of Operations Research, Springer; Gesellschaft für Operations Research (GOR); Nederlands Genootschap voor Besliskunde (NGB), vol. 94(1), pages 35-69, August.
    5. Nicole Bauerle & Alexander Glauner, 2020. "Minimizing Spectral Risk Measures Applied to Markov Decision Processes," Papers 2012.04521, arXiv.org.
    6. Qingda Wei & Xian Chen, 2023. "Continuous-Time Markov Decision Processes Under the Risk-Sensitive First Passage Discounted Cost Criterion," Journal of Optimization Theory and Applications, Springer, vol. 197(1), pages 309-333, April.
    7. Yi, Zonggen & Luo, Yusheng & Westover, Tyler & Katikaneni, Sravya & Ponkiya, Binaka & Sah, Suba & Mahmud, Sadab & Raker, David & Javaid, Ahmad & Heben, Michael J. & Khanna, Raghav, 2022. "Deep reinforcement learning based optimization for a tightly coupled nuclear renewable integrated energy system," Applied Energy, Elsevier, vol. 328(C).
    8. O. L. V. Costa & F. Dufour, 2021. "Integro-differential optimality equations for the risk-sensitive control of piecewise deterministic Markov processes," Mathematical Methods of Operations Research, Springer; Gesellschaft für Operations Research (GOR); Nederlands Genootschap voor Besliskunde (NGB), vol. 93(2), pages 327-357, April.
    9. S. David Wu & Eui-Seok Byeon & Robert H. Storer, 1999. "A Graph-Theoretic Decomposition of the Job Shop Scheduling Problem to Achieve Scheduling Robustness," Operations Research, INFORMS, vol. 47(1), pages 113-124, February.
    10. Shichang Xiao & Zigao Wu & Hongyan Dui, 2022. "Resilience-Based Surrogate Robustness Measure and Optimization Method for Robust Job-Shop Scheduling," Mathematics, MDPI, vol. 10(21), pages 1-22, October.
    11. Meloni, Carlo & Pranzo, Marco & Samà, Marcella, 2022. "Evaluation of VaR and CVaR for the makespan in interval valued blocking job shops," International Journal of Production Economics, Elsevier, vol. 247(C).
    12. Maximilian Blesch & Philipp Eisenhauer, 2021. "Robust decision-making under risk and ambiguity," Papers 2104.12573, arXiv.org, revised Oct 2021.
    13. Jose L. Andrade-Pineda & David Canca & Pedro L. Gonzalez-R & M. Calle, 2020. "Scheduling a dual-resource flexible job shop with makespan and due date-related criteria," Annals of Operations Research, Springer, vol. 291(1), pages 5-35, August.
    14. Nicholas G. Hall & Marc E. Posner & Chris N. Potts, 2021. "Online production planning to maximize the number of on-time orders," Annals of Operations Research, Springer, vol. 298(1), pages 249-269, March.
    15. Liying Xu & Jiadi Zhu & Bing Chen & Zhen Yang & Keqin Liu & Bingjie Dang & Teng Zhang & Yuchao Yang & Ru Huang, 2022. "A distributed nanocluster based multi-agent evolutionary network," Nature Communications, Nature, vol. 13(1), pages 1-10, December.
    16. Daphne Cornelisse & Thomas Rood & Mateusz Malinowski & Yoram Bachrach & Tal Kachman, 2022. "Neural Payoff Machines: Predicting Fair and Stable Payoff Allocations Among Team Members," Papers 2208.08798, arXiv.org.
    17. Bhabak, Arnab & Saha, Subhamay, 2022. "Risk-sensitive semi-Markov decision problems with discounted cost and general utilities," Statistics & Probability Letters, Elsevier, vol. 184(C).
    18. Shun Jia & Yang Yang & Shuyu Li & Shang Wang & Anbang Li & Wei Cai & Yang Liu & Jian Hao & Luoke Hu, 2024. "The Green Flexible Job-Shop Scheduling Problem Considering Cost, Carbon Emissions, and Customer Satisfaction under Time-of-Use Electricity Pricing," Sustainability, MDPI, vol. 16(6), pages 1-22, March.
    19. Selcuk Goren & Ihsan Sabuncuoglu & Utku Koc, 2012. "Optimization of schedule stability and efficiency under processing time variability and random machine breakdowns in a job shop environment," Naval Research Logistics (NRL), John Wiley & Sons, vol. 59(1), pages 26-38, February.
    20. Weisheng Chiu & Thomas Chun Man Fan & Sang-Back Nam & Ping-Hung Sun, 2021. "Knowledge Mapping and Sustainable Development of eSports Research: A Bibliometric and Visualized Analysis," Sustainability, MDPI, vol. 13(18), pages 1-17, September.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:joinma:v:33:y:2022:i:4:d:10.1007_s10845-022-01915-2. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.