
Designing an adaptive production control system using reinforcement learning

Authors

Listed:
  • Andreas Kuhnle

    (Institute of Production Science, Karlsruhe Institute of Technology (KIT))

  • Jan-Philipp Kaiser

    (Institute of Production Science, Karlsruhe Institute of Technology (KIT))

  • Felix Theiß

    (Institute of Production Science, Karlsruhe Institute of Technology (KIT))

  • Nicole Stricker

    (Institute of Production Science, Karlsruhe Institute of Technology (KIT))

  • Gisela Lanza

    (Institute of Production Science, Karlsruhe Institute of Technology (KIT))

Abstract

Modern production systems face enormous challenges due to rising customer requirements, which result in increasingly complex production systems. Operational efficiency in a competitive industry is ensured by an adequate production control system that manages all operations in order to optimize key performance indicators. Current control systems are mostly based on static, model-based heuristics, which require significant human domain knowledge and hence do not match the dynamic environment of manufacturing companies. Data-driven reinforcement learning (RL) has shown compelling results in applications such as board and computer games as well as in first production applications. This paper addresses the design of RL for an adaptive production control system, using the real-world example of order dispatching in a complex job shop. Since RL algorithms are “black box” approaches, they inherently prohibit a comprehensive understanding of their decisions. Furthermore, experience with advanced RL algorithms is still limited to isolated successful applications, which restricts the transferability of results. In this paper, we examine how the design of the RL state, action, and reward function affects performance. In analyzing the results, we identify robust RL designs. This makes RL an advantageous control approach for highly dynamic and complex production systems, especially when domain knowledge is limited.
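The state, action, and reward design question raised in the abstract can be made concrete with a small sketch. The following toy dispatching environment is not the authors' implementation; the class name, the load-based state, and the completion-time reward are illustrative assumptions only:

import random

class JobShopDispatchEnv:
    """Toy job shop: at each step the agent dispatches the next waiting
    order to one of several machines (action); the state encodes machine
    workloads plus the number of orders left; the reward penalizes the
    order's completion time, a stand-in for utilization/tardiness KPIs."""

    def __init__(self, n_machines=3, n_orders=20, seed=0):
        self.n_machines = n_machines
        self.n_orders = n_orders
        self.rng = random.Random(seed)

    def reset(self):
        self.load = [0.0] * self.n_machines  # remaining busy time per machine
        self.remaining = self.n_orders
        return self._state()

    def _state(self):
        # State: current machine loads and orders still to dispatch.
        return tuple(self.load) + (self.remaining,)

    def step(self, action):
        # Action: index of the machine receiving the next order.
        proc_time = self.rng.uniform(1.0, 5.0)  # toy processing time
        self.load[action] += proc_time
        self.remaining -= 1
        # Reward: negative completion time of this order, so a learning
        # agent is pushed toward balancing load across machines.
        reward = -self.load[action]
        done = self.remaining == 0
        return self._state(), reward, done

# Baseline: a random dispatching policy, against which a trained
# RL agent (e.g., tabular Q-learning or a policy-gradient method)
# would be compared.
env = JobShopDispatchEnv()
state, done, total = env.reset(), False, 0.0
while not done:
    state, reward, done = env.step(random.randrange(env.n_machines))
    total += reward
print(f"episode return: {total:.1f}")

The paper's contribution lies in comparing such state, action, and reward designs and identifying robust ones; a realistic dispatching environment would add job routings, set-up times, and machine failures to these toy dynamics.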

Suggested Citation

  • Andreas Kuhnle & Jan-Philipp Kaiser & Felix Theiß & Nicole Stricker & Gisela Lanza, 2021. "Designing an adaptive production control system using reinforcement learning," Journal of Intelligent Manufacturing, Springer, vol. 32(3), pages 855-876, March.
  • Handle: RePEc:spr:joinma:v:32:y:2021:i:3:d:10.1007_s10845-020-01612-y
    DOI: 10.1007/s10845-020-01612-y

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s10845-020-01612-y
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1007/s10845-020-01612-y?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Zeng, Qingcheng & Yang, Zhongzhen & Lai, Luyuan, 2009. "Models and algorithms for multi-crane oriented scheduling method in container terminals," Transport Policy, Elsevier, vol. 16(5), pages 271-278, September.
    2. Zhang, Zhicong & Zheng, Li & Hou, Forest & Li, Na, 2011. "Semiconductor final test scheduling with Sarsa(λ, k) algorithm," European Journal of Operational Research, Elsevier, vol. 215(2), pages 446-458, December.
    3. Jens Heger & Jürgen Branke & Torsten Hildebrandt & Bernd Scholz-Reiter, 2016. "Dynamic adjustment of dispatching rule parameters in flow shops with sequence-dependent set-up times," International Journal of Production Research, Taylor & Francis Journals, vol. 54(22), pages 6812-6824, November.
    4. T Wauters & K Verbeeck & G Vanden Berghe & P De Causmaecker, 2011. "Learning agents for the multi-mode project scheduling problem," Journal of the Operational Research Society, Palgrave Macmillan;The OR Society, vol. 62(2), pages 281-290, February.
    5. Kfir Arviv & Helman Stern & Yael Edan, 2016. "Collaborative reinforcement learning for a two-robot job transfer flow-shop scheduling problem," International Journal of Production Research, Taylor & Francis Journals, vol. 54(4), pages 1196-1209, February.
    6. S. S. Panwalkar & Wafik Iskander, 1977. "A Survey of Scheduling Rules," Operations Research, INFORMS, vol. 25(1), pages 45-61, February.
    7. Xiao Wang & Hongwei Wang & Chao Qi, 2016. "Multi-agent reinforcement learning based maintenance policy for a resource constrained flow line system," Journal of Intelligent Manufacturing, Springer, vol. 27(2), pages 325-333, April.
    8. David Silver & Julian Schrittwieser & Karen Simonyan & Ioannis Antonoglou & Aja Huang & Arthur Guez & Thomas Hubert & Lucas Baker & Matthew Lai & Adrian Bolton & Yutian Chen & Timothy Lillicrap & Fan Hui & Laurent Sifre & George van den Driessche & Thore Graepel & Demis Hassabis, 2017. "Mastering the game of Go without human knowledge," Nature, Nature, vol. 550(7676), pages 354-359, October.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Konstantinos S. Boulas & Georgios D. Dounias & Chrissoleon T. Papadopoulos, 2023. "A hybrid evolutionary algorithm approach for estimating the throughput of short reliable approximately balanced production lines," Journal of Intelligent Manufacturing, Springer, vol. 34(2), pages 823-852, February.
    2. Nan Ma & Hongqi Li & Hualin Liu, 2024. "State-Space Compression for Efficient Policy Learning in Crude Oil Scheduling," Mathematics, MDPI, vol. 12(3), pages 1-16, January.
    3. Ming Zhang & Yang Lu & Youxi Hu & Nasser Amaitik & Yuchun Xu, 2022. "Dynamic Scheduling Method for Job-Shop Manufacturing Systems by Deep Reinforcement Learning with Proximal Policy Optimization," Sustainability, MDPI, vol. 14(9), pages 1-16, April.
    4. Sebastian Mayer & Tobias Classen & Christian Endisch, 2021. "Modular production control using deep reinforcement learning: proximal policy optimization," Journal of Intelligent Manufacturing, Springer, vol. 32(8), pages 2335-2351, December.
    5. Marco Wurster & Marius Michel & Marvin Carl May & Andreas Kuhnle & Nicole Stricker & Gisela Lanza, 2022. "Modelling and condition-based control of a flexible and hybrid disassembly system with manual and autonomous workstations using reinforcement learning," Journal of Intelligent Manufacturing, Springer, vol. 33(2), pages 575-591, February.
    6. Hien Nguyen Ngoc & Ganix Lasa & Ion Iriarte, 2022. "Human-centred design in industry 4.0: case study review and opportunities for future research," Journal of Intelligent Manufacturing, Springer, vol. 33(1), pages 35-76, January.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Juan Pablo Usuga Cadavid & Samir Lamouri & Bernard Grabot & Robert Pellerin & Arnaud Fortin, 2020. "Machine learning applied in production planning and control: a state-of-the-art in the era of industry 4.0," Journal of Intelligent Manufacturing, Springer, vol. 31(6), pages 1531-1558, August.
    2. Behice Meltem Kayhan & Gokalp Yildiz, 2023. "Reinforcement learning applications to machine scheduling problems: a comprehensive literature review," Journal of Intelligent Manufacturing, Springer, vol. 34(3), pages 905-929, March.
    3. Drexl, Andreas & Kolisch, Rainer, 1991. "Produktionsplanung und -steuerung bei Einzel- und Kleinserienfertigung" [Production planning and control in single-unit and small-batch manufacturing], Manuskripte aus den Instituten für Betriebswirtschaftslehre der Universität Kiel 281, Christian-Albrechts-Universität zu Kiel, Institut für Betriebswirtschaftslehre.
    4. Yuchen Zhang & Wei Yang, 2022. "Breakthrough invention and problem complexity: Evidence from a quasi‐experiment," Strategic Management Journal, Wiley Blackwell, vol. 43(12), pages 2510-2544, December.
    5. Jianxin Fang & Brenda Cheang & Andrew Lim, 2023. "Problems and Solution Methods of Machine Scheduling in Semiconductor Manufacturing Operations: A Survey," Sustainability, MDPI, vol. 15(17), pages 1-44, August.
    6. Daníelsson, Jón & Macrae, Robert & Uthemann, Andreas, 2022. "Artificial intelligence and systemic risk," Journal of Banking & Finance, Elsevier, vol. 140(C).
    7. Anurag Agarwal & Varghese S. Jacob & Hasan Pirkul, 2006. "An Improved Augmented Neural-Network Approach for Scheduling Problems," INFORMS Journal on Computing, INFORMS, vol. 18(1), pages 119-128, February.
    8. Binzi Xu & Kai Xu & Baolin Fei & Dengchao Huang & Liang Tao & Yan Wang, 2024. "Automatic Design of Energy-Efficient Dispatching Rules for Multi-Objective Dynamic Flexible Job Shop Scheduling Based on Dual Feature Weight Sets," Mathematics, MDPI, vol. 12(10), pages 1-24, May.
    9. Parlakturk, Ali & Kumar, Sunil, 2004. "Self-Interested Routing in Queueing Networks," Research Papers 1782r, Stanford University, Graduate School of Business.
    10. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    11. Ostheimer, Julia & Chowdhury, Soumitra & Iqbal, Sarfraz, 2021. "An alliance of humans and machines for machine learning: Hybrid intelligent systems and their design principles," Technology in Society, Elsevier, vol. 66(C).
    12. Boute, Robert N. & Gijsbrechts, Joren & van Jaarsveld, Willem & Vanvuchelen, Nathalie, 2022. "Deep reinforcement learning for inventory control: A roadmap," European Journal of Operational Research, Elsevier, vol. 298(2), pages 401-412.
    13. Correa-Jullian, Camila & López Droguett, Enrique & Cardemil, José Miguel, 2020. "Operation scheduling in a solar thermal system: A reinforcement learning-based framework," Applied Energy, Elsevier, vol. 268(C).
    14. Bierwirth, C. & Kuhpfahl, J., 2017. "Extended GRASP for the job shop scheduling problem with total weighted tardiness objective," European Journal of Operational Research, Elsevier, vol. 261(3), pages 835-848.
    15. Mobin, Mohammadsadegh & Li, Zhaojun & Cheraghi, S. Hossein & Wu, Gongyu, 2019. "An approach for design Verification and Validation planning and optimization for new product reliability improvement," Reliability Engineering and System Safety, Elsevier, vol. 190(C), pages 1-1.
    16. Fotuhi, Fateme & Huynh, Nathan & Vidal, Jose M. & Xie, Yuanchang, 2013. "Modeling yard crane operators as reinforcement learning agents," Research in Transportation Economics, Elsevier, vol. 42(1), pages 3-12.
    17. Zhou, Yuhao & Wang, Yanwei, 2022. "An integrated framework based on deep learning algorithm for optimizing thermochemical production in heavy oil reservoirs," Energy, Elsevier, vol. 253(C).
    18. Mandal, Ankit & Tiwari, Yash & Panigrahi, Prasanta K. & Pal, Mayukha, 2022. "Physics aware analytics for accurate state prediction of dynamical systems," Chaos, Solitons & Fractals, Elsevier, vol. 164(C).
    19. Adnan Jafar & Alessandra Kobayati & Michael A. Tsoukas & Ahmad Haidar, 2024. "Personalized insulin dosing using reinforcement learning for high-fat meals and aerobic exercises in type 1 diabetes: a proof-of-concept trial," Nature Communications, Nature, vol. 15(1), pages 1-12, December.
    20. Shun Jia & Yang Yang & Shuyu Li & Shang Wang & Anbang Li & Wei Cai & Yang Liu & Jian Hao & Luoke Hu, 2024. "The Green Flexible Job-Shop Scheduling Problem Considering Cost, Carbon Emissions, and Customer Satisfaction under Time-of-Use Electricity Pricing," Sustainability, MDPI, vol. 16(6), pages 1-22, March.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:joinma:v:32:y:2021:i:3:d:10.1007_s10845-020-01612-y. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.