IDEAS home Printed from https://ideas.repec.org/a/taf/tprsxx/v61y2023i16p5772-5789.html

Reinforcement learning applied to production planning and control

Author

Listed:
  • Ana Esteso
  • David Peidro
  • Josefa Mula
  • Manuel Díaz-Madroñero

Abstract

The objective of this paper is to examine the use and applications of reinforcement learning (RL) techniques in the production planning and control (PPC) field, addressing the following PPC areas: facility resource planning, capacity planning, purchase and supply management, production scheduling and inventory management. The main RL characteristics, such as method, context, states, actions, reward and highlights, were analysed. The number of agents considered, the applications and the RL software tools, specifically programming languages, platforms, application programming interfaces and RL frameworks, among others, were identified, and 181 articles were reviewed. The results showed that RL was applied mainly to production scheduling problems, followed by purchase and supply management. The most frequently reviewed RL algorithms were model-free and single-agent, and were applied to simplified PPC environments. Nevertheless, their results seem promising compared to traditional mathematical programming and heuristic/metaheuristic solution methods, and even more so when they incorporate uncertainty or non-linear properties. Finally, RL value-based approaches are the most widely used, specifically Q-learning and its variants and, for deep RL, deep Q-networks. In recent years, however, the most widely used approach has been the actor-critic method, such as the advantage actor-critic, proximal policy optimisation, deep deterministic policy gradient and trust region policy optimisation.
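To make the value-based family of methods highlighted in the abstract concrete, the following is a minimal, illustrative sketch of tabular Q-learning on a toy single-machine scheduling problem. The job set, processing times, reward design and all hyperparameters are invented for illustration and do not come from the surveyed paper; the point is only to show the state/action/reward structure that the review analyses for each article.

```python
import random
from collections import defaultdict

# Hypothetical toy problem for illustration: sequence four jobs on one machine
# to minimise total completion time. Job IDs map to processing times; the
# known-optimal rule for this objective is shortest-processing-time (SPT) first.
PROC_TIMES = {0: 4, 1: 2, 2: 7, 3: 1}

def run_episode(Q, eps, alpha=0.1, gamma=1.0):
    """One episode of tabular Q-learning. State = frozenset of remaining jobs;
    action = next job to run; reward = minus that job's completion time."""
    remaining = frozenset(PROC_TIMES)
    elapsed = 0
    while remaining:
        jobs = sorted(remaining)
        if random.random() < eps:                       # epsilon-greedy exploration
            job = random.choice(jobs)
        else:
            job = max(jobs, key=lambda j: Q[(remaining, j)])
        elapsed += PROC_TIMES[job]
        reward = -elapsed                               # completion time of this job
        nxt = remaining - {job}
        best_next = max((Q[(nxt, j)] for j in nxt), default=0.0)
        # Standard Q-learning update (off-policy, bootstrapped target).
        Q[(remaining, job)] += alpha * (reward + gamma * best_next - Q[(remaining, job)])
        remaining = nxt

def train(episodes=3000, seed=0):
    random.seed(seed)
    Q = defaultdict(float)
    for ep in range(episodes):
        eps = max(0.05, 1.0 - ep / episodes)            # decay exploration over time
        run_episode(Q, eps)
    return Q

def greedy_schedule(Q):
    """Read out the learned policy by acting greedily on Q."""
    remaining, order = frozenset(PROC_TIMES), []
    while remaining:
        job = max(sorted(remaining), key=lambda j: Q[(remaining, j)])
        order.append(job)
        remaining = remaining - {job}
    return order

if __name__ == "__main__":
    print(greedy_schedule(train()))  # → [3, 1, 0, 2], the SPT order
```

Note that the state here (the set of unscheduled jobs) keeps the problem Markovian, since elapsed time is recoverable from which jobs remain. The review finds that most surveyed works use exactly this kind of simplified, model-free, single-agent formulation; deep RL variants replace the Q table with a neural network when the state space grows.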

Suggested Citation

  • Ana Esteso & David Peidro & Josefa Mula & Manuel Díaz-Madroñero, 2023. "Reinforcement learning applied to production planning and control," International Journal of Production Research, Taylor & Francis Journals, vol. 61(16), pages 5772-5789, August.
  • Handle: RePEc:taf:tprsxx:v:61:y:2023:i:16:p:5772-5789
    DOI: 10.1080/00207543.2022.2104180

    Download full text from publisher

    File URL: http://hdl.handle.net/10.1080/00207543.2022.2104180
    Download Restriction: Access to full text is restricted to subscribers.

    File URL: https://libkey.io/10.1080/00207543.2022.2104180?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    As the access to this document is restricted, you may want to search for a different version of it.

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Nan Ma & Hongqi Li & Hualin Liu, 2024. "State-Space Compression for Efficient Policy Learning in Crude Oil Scheduling," Mathematics, MDPI, vol. 12(3), pages 1-16, January.
    2. Li, Kunpeng & Liu, Tengbo & Ram Kumar, P.N. & Han, Xuefang, 2024. "A reinforcement learning-based hyper-heuristic for AGV task assignment and route planning in parts-to-picker warehouses," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 185(C).
    3. João Reis, 2023. "Exploring Applications and Practical Examples by Streamlining Material Requirements Planning (MRP) with Python," Logistics, MDPI, vol. 7(4), pages 1-19, December.

    More about this item

    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:taf:tprsxx:v:61:y:2023:i:16:p:5772-5789. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Chris Longhurst (email available below). General contact details of provider: http://www.tandfonline.com/TPRS20 .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.