Printed from https://ideas.repec.org/a/gam/jmathe/v10y2022i9p1604-d811240.html

Playful Probes for Design Interaction with Machine Learning: A Tool for Aircraft Condition-Based Maintenance Planning and Visualisation

Authors

  • Jorge Ribeiro

    (CISUC—Centre for Informatics and Systems, Department of Informatics Engineering, University of Coimbra, 3004-531 Coimbra, Portugal)

  • Pedro Andrade

    (CISUC—Centre for Informatics and Systems, Department of Informatics Engineering, University of Coimbra, 3004-531 Coimbra, Portugal)

  • Manuel Carvalho

    (CISUC—Centre for Informatics and Systems, Department of Informatics Engineering, University of Coimbra, 3004-531 Coimbra, Portugal)

  • Catarina Silva

    (CISUC—Centre for Informatics and Systems, Department of Informatics Engineering, University of Coimbra, 3004-531 Coimbra, Portugal)

  • Bernardete Ribeiro

    (CISUC—Centre for Informatics and Systems, Department of Informatics Engineering, University of Coimbra, 3004-531 Coimbra, Portugal)

  • Licínio Roque

    (CISUC—Centre for Informatics and Systems, Department of Informatics Engineering, University of Coimbra, 3004-531 Coimbra, Portugal)

Abstract

Aircraft maintenance is a complex domain in which designing new systems that include Machine Learning (ML) algorithms can be a challenge. In the context of designing a tool for Condition-Based Maintenance (CBM) in aircraft maintenance planning, this case study addresses (1) the use of a Playful Probing approach to obtain insights into how to design for interaction with ML algorithms, (2) the integration of a Reinforcement Learning (RL) agent for Human–AI collaboration in maintenance planning, and (3) the visualisation of CBM indicators. Following a design science research approach, we designed a Playful Probe protocol and materials and evaluated the results by running a participatory design workshop. Our main contribution is to show how to elicit ideas for integrating maintenance planning practices with ML estimation tools and the RL agent. Through a participatory design workshop with participant observation, in which participants played with CBM artefacts, the Playful Probes favoured the elicitation of user requirements for interaction with the RL planning agent, helping the planner obtain a reliable maintenance plan, and made it possible to understand how to represent CBM indicators and visualise them through a trajectory prediction.

Suggested Citation

  • Jorge Ribeiro & Pedro Andrade & Manuel Carvalho & Catarina Silva & Bernardete Ribeiro & Licínio Roque, 2022. "Playful Probes for Design Interaction with Machine Learning: A Tool for Aircraft Condition-Based Maintenance Planning and Visualisation," Mathematics, MDPI, vol. 10(9), pages 1-20, May.
  • Handle: RePEc:gam:jmathe:v:10:y:2022:i:9:p:1604-:d:811240

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/10/9/1604/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/10/9/1604/
    Download Restriction: no

    References listed on IDEAS

    1. Xiao Wang & Hongwei Wang & Chao Qi, 2016. "Multi-agent reinforcement learning based maintenance policy for a resource constrained flow line system," Journal of Intelligent Manufacturing, Springer, vol. 27(2), pages 325-333, April.
    2. Stephane R. A. Barde & Soumaya Yacout & Hayong Shin, 2019. "Optimal preventive maintenance policy based on reinforcement learning of a fleet of military trucks," Journal of Intelligent Manufacturing, Springer, vol. 30(1), pages 147-161, January.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Correa-Jullian, Camila & López Droguett, Enrique & Cardemil, José Miguel, 2020. "Operation scheduling in a solar thermal system: A reinforcement learning-based framework," Applied Energy, Elsevier, vol. 268(C).
    2. Yang, Hongbing & Li, Wenchao & Wang, Bin, 2021. "Joint optimization of preventive maintenance and production scheduling for multi-state production systems based on reinforcement learning," Reliability Engineering and System Safety, Elsevier, vol. 214(C).
    3. Ashish Kumar & Roussos Dimitrakopoulos & Marco Maulen, 2020. "Adaptive self-learning mechanisms for updating short-term production decisions in an industrial mining complex," Journal of Intelligent Manufacturing, Springer, vol. 31(7), pages 1795-1811, October.
    4. Wu, Tianyi & Yang, Li & Ma, Xiaobing & Zhang, Zihan & Zhao, Yu, 2020. "Dynamic maintenance strategy with iteratively updated group information," Reliability Engineering and System Safety, Elsevier, vol. 197(C).
    5. Wei, Shuaichong & Nourelfath, Mustapha & Nahas, Nabil, 2023. "Analysis of a production line subject to degradation and preventive maintenance," Reliability Engineering and System Safety, Elsevier, vol. 230(C).
    6. Pedro J. Rivera Torres & Eileen I. Serrano Mercado & Orestes Llanes Santiago & Luis Anido Rifón, 2018. "Modeling preventive maintenance of manufacturing processes with probabilistic Boolean networks with interventions," Journal of Intelligent Manufacturing, Springer, vol. 29(8), pages 1941-1952, December.
    7. Barlow, E. & Bedford, T. & Revie, M. & Tan, J. & Walls, L., 2021. "A performance-centred approach to optimising maintenance of complex systems," European Journal of Operational Research, Elsevier, vol. 292(2), pages 579-595.
    8. Yuanju Qu & Zengtao Hou, 2022. "Degradation principle of machines influenced by maintenance," Journal of Intelligent Manufacturing, Springer, vol. 33(5), pages 1521-1530, June.
    9. Ye, Zhenggeng & Cai, Zhiqiang & Yang, Hui & Si, Shubin & Zhou, Fuli, 2023. "Joint optimization of maintenance and quality inspection for manufacturing networks based on deep reinforcement learning," Reliability Engineering and System Safety, Elsevier, vol. 236(C).
    10. Cheng, Jianda & Cheng, Minghui & Liu, Yan & Wu, Jun & Li, Wei & Frangopol, Dan M., 2024. "Knowledge transfer for adaptive maintenance policy optimization in engineering fleets based on meta-reinforcement learning," Reliability Engineering and System Safety, Elsevier, vol. 247(C).
    11. Zhang, Ning & Qi, Faqun & Zhang, Chengjie & Zhou, Hongming, 2022. "Joint optimization of condition-based maintenance policy and buffer capacity for a two-unit series system," Reliability Engineering and System Safety, Elsevier, vol. 219(C).
    12. Mohammadi, Reza & He, Qing, 2022. "A deep reinforcement learning approach for rail renewal and maintenance planning," Reliability Engineering and System Safety, Elsevier, vol. 225(C).
    13. Andreas Kuhnle & Jan-Philipp Kaiser & Felix Theiß & Nicole Stricker & Gisela Lanza, 2021. "Designing an adaptive production control system using reinforcement learning," Journal of Intelligent Manufacturing, Springer, vol. 32(3), pages 855-876, March.
    14. Michele Compare & Luca Bellani & Enrico Cobelli & Enrico Zio & Francesco Annunziata & Fausto Carlevaro & Marzia Sepe, 2020. "A reinforcement learning approach to optimal part flow management for gas turbine maintenance," Journal of Risk and Reliability, vol. 234(1), pages 52-62, February.
    15. Qinming Liu & Ming Dong & Wenyuan Lv & Chunming Ye, 2019. "Manufacturing system maintenance based on dynamic programming model with prognostics information," Journal of Intelligent Manufacturing, Springer, vol. 30(3), pages 1155-1173, March.
    16. Johannes Dornheim & Lukas Morand & Samuel Zeitvogel & Tarek Iraki & Norbert Link & Dirk Helm, 2022. "Deep reinforcement learning methods for structure-guided processing path optimization," Journal of Intelligent Manufacturing, Springer, vol. 33(1), pages 333-352, January.
    17. A. Khatab & C. Diallo & E.-H. Aghezzaf & U. Venkatadri, 2022. "Optimization of the integrated fleet-level imperfect selective maintenance and repairpersons assignment problem," Journal of Intelligent Manufacturing, Springer, vol. 33(3), pages 703-718, March.
    18. Zheng, Meimei & Su, Zhiyun & Wang, Dong & Pan, Ershun, 2024. "Joint maintenance and spare part ordering from multiple suppliers for multicomponent systems using a deep reinforcement learning algorithm," Reliability Engineering and System Safety, Elsevier, vol. 241(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:10:y:2022:i:9:p:1604-:d:811240. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.