
Multi-Agent Reinforcement Learning for Job Shop Scheduling in Dynamic Environments

Authors

  • Yu Pu (School of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China)

  • Fang Li (School of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China)

  • Shahin Rahimifard (School of Mechanical, Electrical and Manufacturing Engineering, Loughborough University, Loughborough LE11 3TU, UK)

Abstract

In response to the challenges of dynamic adaptability, real-time interactivity, and dynamic optimization that arise when existing deep reinforcement learning algorithms are applied to complex scheduling problems, this study proposes a novel approach that combines graph neural networks with deep reinforcement learning for job shop scheduling. A distributed multi-agent scheduling architecture (DMASA) is constructed to maximize the global reward: the intelligent-manufacturing job shop scheduling problem is modeled as a sequential decision problem represented by graphs, and a Graph Embedding–Heterogeneous Graph Neural Network (GE-HetGNN) encodes the state nodes and maps them to the optimal scheduling strategy, covering both machine-matching and process-selection policies. Finally, a multi-agent proximal policy optimization algorithm based on an actor–critic architecture is employed to train the network and optimize the decision-making process. Experimental results demonstrate that the proposed framework generalizes well, outperforms commonly used scheduling rules and RL-based scheduling methods on benchmarks, is more stable than single-agent scheduling architectures, and is not limited by instance size, making it suitable for large-scale problems. We verified the feasibility of the proposed method in a specific experimental environment; the results show that it supports formal modeling of, and mapping to, specific physical processing workshops, which aligns more closely with real-world green scheduling problems and makes it easier for subsequent researchers to integrate the algorithms with actual environments.
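
For readers who want a concrete picture of the approach the abstract describes, the short PyTorch sketch below shows a generic actor–critic agent over a graph-style scheduling state: node features for operations and machines are embedded and pooled into a state vector, a policy head scores feasible operation–machine pairings, and a value head supports a PPO-style update. This is an illustrative assumption only; the class names (GraphStateEncoder, ActorCritic), the mean-pooling encoder, and the single-agent clipped PPO loss stand in for, and do not reproduce, the paper's GE-HetGNN encoder, DMASA architecture, or multi-agent PPO training procedure.

import torch
import torch.nn as nn


class GraphStateEncoder(nn.Module):
    """Embeds operation and machine node features and pools them into one state vector.

    A stand-in for a heterogeneous graph encoder; no message passing is performed here.
    """

    def __init__(self, op_feat_dim: int, mc_feat_dim: int, hidden: int = 64):
        super().__init__()
        self.op_mlp = nn.Sequential(nn.Linear(op_feat_dim, hidden), nn.ReLU())
        self.mc_mlp = nn.Sequential(nn.Linear(mc_feat_dim, hidden), nn.ReLU())
        self.mix = nn.Linear(2 * hidden, hidden)

    def forward(self, op_feats: torch.Tensor, mc_feats: torch.Tensor) -> torch.Tensor:
        # op_feats: (num_operations, op_feat_dim); mc_feats: (num_machines, mc_feat_dim)
        op_emb = self.op_mlp(op_feats).mean(dim=0)  # pool operation nodes
        mc_emb = self.mc_mlp(mc_feats).mean(dim=0)  # pool machine nodes
        return torch.relu(self.mix(torch.cat([op_emb, mc_emb], dim=-1)))


class ActorCritic(nn.Module):
    """Policy over candidate (operation, machine) pairings plus a state-value estimate."""

    def __init__(self, encoder: GraphStateEncoder, hidden: int = 64):
        super().__init__()
        self.encoder = encoder
        self.actor = nn.Linear(hidden, 1)   # scores one candidate pairing
        self.critic = nn.Linear(hidden, 1)  # value of the pooled state

    def forward(self, op_feats, mc_feats, cand_feats):
        # cand_feats: (num_candidates, hidden) embeddings of feasible pairings
        state = self.encoder(op_feats, mc_feats)
        logits = self.actor(cand_feats + state).squeeze(-1)
        return torch.distributions.Categorical(logits=logits), self.critic(state)


def ppo_loss(new_logp, old_logp, advantage, value, returns, clip=0.2, vf_coef=0.5):
    """Standard clipped PPO objective: policy term plus value-function term."""
    ratio = torch.exp(new_logp - old_logp)
    policy_term = -torch.min(ratio * advantage,
                             torch.clamp(ratio, 1.0 - clip, 1.0 + clip) * advantage).mean()
    value_term = vf_coef * (value.squeeze(-1) - returns).pow(2).mean()
    return policy_term + value_term


# Toy usage: 5 operation nodes, 3 machine nodes, 4 feasible dispatching candidates.
agent = ActorCritic(GraphStateEncoder(op_feat_dim=6, mc_feat_dim=4))
dist, value = agent(torch.randn(5, 6), torch.randn(3, 4), torch.randn(4, 64))
action = dist.sample()  # index of the chosen operation-machine pairing

In a full multi-agent setup, one such actor–critic would be instantiated per scheduling agent and trained against a shared or global reward, which is the coordination problem the paper's distributed architecture addresses.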

Suggested Citation

  • Yu Pu & Fang Li & Shahin Rahimifard, 2024. "Multi-Agent Reinforcement Learning for Job Shop Scheduling in Dynamic Environments," Sustainability, MDPI, vol. 16(8), pages 1-26, April.
  • Handle: RePEc:gam:jsusta:v:16:y:2024:i:8:p:3234-:d:1374658

    Download full text from publisher

    File URL: https://www.mdpi.com/2071-1050/16/8/3234/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2071-1050/16/8/3234/
    Download Restriction: no

    References listed on IDEAS

    1. Lu, Renzhi & Li, Yi-Chang & Li, Yuting & Jiang, Junhui & Ding, Yuemin, 2020. "Multi-agent deep reinforcement learning based demand response for discrete manufacturing systems energy management," Applied Energy, Elsevier, vol. 276(C).
    2. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    3. Jain, A. S. & Meeran, S., 1999. "Deterministic job-shop scheduling: Past, present and future," European Journal of Operational Research, Elsevier, vol. 113(2), pages 390-434, March.
    4. Adil Baykasoğlu & Fatma S. Karaslan, 2017. "Solving comprehensive dynamic job shop scheduling problem by using a GRASP-based approach," International Journal of Production Research, Taylor & Francis Journals, vol. 55(11), pages 3308-3325, June.
    5. Kamran Kianfar & Arezoo Atighehchian, 2023. "A hybrid heuristic approach to master surgery scheduling with downstream resource constraints and dividable operating room blocks," Annals of Operations Research, Springer, vol. 328(1), pages 727-754, September.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Omar Al-Ani & Sanjoy Das, 2022. "Reinforcement Learning: Theory and Applications in HEMS," Energies, MDPI, vol. 15(17), pages 1-37, September.
    2. Qiu, Dawei & Ye, Yujian & Papadaskalopoulos, Dimitrios & Strbac, Goran, 2021. "Scalable coordinated management of peer-to-peer energy trading: A multi-cluster deep reinforcement learning approach," Applied Energy, Elsevier, vol. 292(C).
    3. Xie, Jiahan & Ajagekar, Akshay & You, Fengqi, 2023. "Multi-Agent attention-based deep reinforcement learning for demand response in grid-responsive buildings," Applied Energy, Elsevier, vol. 342(C).
    4. Dinh, Huy Truong & Lee, Kyu-haeng & Kim, Daehee, 2022. "Supervised-learning-based hour-ahead demand response for a behavior-based home energy management system approximating MILP optimization," Applied Energy, Elsevier, vol. 321(C).
    5. Ali Fırat İnal & Çağrı Sel & Adnan Aktepe & Ahmet Kürşad Türker & Süleyman Ersöz, 2023. "A Multi-Agent Reinforcement Learning Approach to the Dynamic Job Shop Scheduling Problem," Sustainability, MDPI, vol. 15(10), pages 1-24, May.
    6. Tulika Saha & Sriparna Saha & Pushpak Bhattacharyya, 2020. "Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-28, July.
    7. Wolosewicz, Cathy & Dauzère-Pérès, Stéphane & Aggoune, Riad, 2015. "A Lagrangian heuristic for an integrated lot-sizing and fixed scheduling problem," European Journal of Operational Research, Elsevier, vol. 244(1), pages 3-12.
    8. Mahmoud Mahfouz & Angelos Filos & Cyrine Chtourou & Joshua Lockhart & Samuel Assefa & Manuela Veloso & Danilo Mandic & Tucker Balch, 2019. "On the Importance of Opponent Modeling in Auction Markets," Papers 1911.12816, arXiv.org.
    9. Imen Azzouz & Wiem Fekih Hassen, 2023. "Optimization of Electric Vehicles Charging Scheduling Based on Deep Reinforcement Learning: A Decentralized Approach," Energies, MDPI, vol. 16(24), pages 1-18, December.
    10. Jacob W. Crandall & Mayada Oudah & Tennom & Fatimah Ishowo-Oloko & Sherief Abdallah & Jean-François Bonnefon & Manuel Cebrian & Azim Shariff & Michael A. Goodrich & Iyad Rahwan, 2018. "Cooperating with machines," Nature Communications, Nature, vol. 9(1), pages 1-12, December.
      • Abdallah, Sherief & Bonnefon, Jean-François & Cebrian, Manuel & Crandall, Jacob W. & Ishowo-Oloko, Fatimah & Oudah, Mayada & Rahwan, Iyad & Shariff, Azim & Tennom,, 2017. "Cooperating with Machines," TSE Working Papers 17-806, Toulouse School of Economics (TSE).
      • Abdallah, Sherief & Bonnefon, Jean-François & Cebrian, Manuel & Crandall, Jacob W. & Ishowo-Oloko, Fatimah & Oudah, Mayada & Rahwan, Iyad & Shariff, Azim & Tennom,, 2017. "Cooperating with Machines," IAST Working Papers 17-68, Institute for Advanced Study in Toulouse (IAST).
      • Jacob Crandall & Mayada Oudah & Tennom & Fatimah Ishowo-Oloko & Sherief Abdallah & Jean-François Bonnefon & Manuel Cebrian & Azim Shariff & Michael Goodrich & Iyad Rahwan, 2018. "Cooperating with machines," Post-Print hal-01897802, HAL.
    11. Sun, Alexander Y., 2020. "Optimal carbon storage reservoir management through deep reinforcement learning," Applied Energy, Elsevier, vol. 278(C).
    12. Yassine Chemingui & Adel Gastli & Omar Ellabban, 2020. "Reinforcement Learning-Based School Energy Management System," Energies, MDPI, vol. 13(23), pages 1-21, December.
    13. Tao Ren & Yan Zhang & Shuenn-Ren Cheng & Chin-Chia Wu & Meng Zhang & Bo-yu Chang & Xin-yue Wang & Peng Zhao, 2020. "Effective Heuristic Algorithms Solving the Jobshop Scheduling Problem with Release Dates," Mathematics, MDPI, vol. 8(8), pages 1-25, July.
    14. Woo Jae Byun & Bumkyu Choi & Seongmin Kim & Joohyun Jo, 2023. "Practical Application of Deep Reinforcement Learning to Optimal Trade Execution," FinTech, MDPI, vol. 2(3), pages 1-16, June.
    15. Lu, Yu & Xiang, Yue & Huang, Yuan & Yu, Bin & Weng, Liguo & Liu, Junyong, 2023. "Deep reinforcement learning based optimal scheduling of active distribution system considering distributed generation, energy storage and flexible load," Energy, Elsevier, vol. 271(C).
    16. Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
    17. Rossi, Andrea, 2014. "Flexible job shop scheduling with sequence-dependent setup and transportation times by ant colony with reinforced pheromone relationships," International Journal of Production Economics, Elsevier, vol. 153(C), pages 253-267.
    18. Huang, Ruchen & He, Hongwen & Gao, Miaojue, 2023. "Training-efficient and cost-optimal energy management for fuel cell hybrid electric bus based on a novel distributed deep reinforcement learning framework," Applied Energy, Elsevier, vol. 346(C).
    19. Sels, Veronique & Craeymeersch, Kjeld & Vanhoucke, Mario, 2011. "A hybrid single and dual population search procedure for the job shop scheduling problem," European Journal of Operational Research, Elsevier, vol. 215(3), pages 512-523, December.
    20. Michelle M. LaMar, 2018. "Markov Decision Process Measurement Model," Psychometrika, Springer;The Psychometric Society, vol. 83(1), pages 67-88, March.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jsusta:v:16:y:2024:i:8:p:3234-:d:1374658. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.