
Explainable artificial intelligence for education and training

Author

Listed:
  • Krzysztof Fiok
  • Farzad V Farahani
  • Waldemar Karwowski
  • Tareq Ahram

Abstract

Researchers and software users benefit from the rapid growth of artificial intelligence (AI) to an unprecedented extent in various domains where automated intelligent action is required. However, as they continue to engage with AI, they also begin to understand the limitations and risks associated with ceding control and decision-making to artificial computer agents that are not always transparent. Understanding "what is happening in the black box" becomes feasible with explainable AI (XAI) methods designed to mitigate these risks and introduce trust into human-AI interactions. Our study reviews the essential capabilities, limitations, and desiderata of XAI tools developed over recent years and traces the history of XAI and AI in education (AIED). We present different approaches to AI and XAI from the viewpoint of researchers focused on AIED in comparison with researchers focused on AI and machine learning (ML). We conclude that both groups desire increased efforts to obtain improved XAI tools; however, they define different target user groups and expectations regarding XAI features and provide different examples of possible achievements. We summarize these viewpoints and provide guidelines for scientists looking to incorporate XAI into their own work.
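
The abstract refers to XAI tools only in general terms. As a minimal, hedged sketch (not taken from the paper and not the authors' method), the Python snippet below shows how a widely used post hoc explanation library, SHAP, can attribute a tree model's predictions to its input features; the dataset, model, and library choices here are illustrative assumptions.

    # Illustrative only: post hoc feature attribution with SHAP.
    # Assumes scikit-learn and shap are installed; the dataset and model are
    # arbitrary stand-ins, not those used in the reviewed study.
    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor

    X, y = load_diabetes(return_X_y=True, as_frame=True)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    # Each SHAP value estimates one feature's contribution to pushing a single
    # prediction away from the average model output.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)

    # Rank features by their overall influence on the model's predictions.
    shap.summary_plot(shap_values, X)

Global feature summaries of this kind are one form that the explanations discussed in the reviewed literature can take.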

Suggested Citation

  • Krzysztof Fiok & Farzad V Farahani & Waldemar Karwowski & Tareq Ahram, 2022. "Explainable artificial intelligence for education and training," The Journal of Defense Modeling and Simulation, , vol. 19(2), pages 133-144, April.
  • Handle: RePEc:sae:joudef:v:19:y:2022:i:2:p:133-144
    DOI: 10.1177/15485129211028651

    Download full text from publisher

    File URL: https://journals.sagepub.com/doi/10.1177/15485129211028651
    Download Restriction: no

    File URL: https://libkey.io/10.1177/15485129211028651?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. Sebastian Lapuschkin & Stephan Wäldchen & Alexander Binder & Grégoire Montavon & Wojciech Samek & Klaus-Robert Müller, 2019. "Unmasking Clever Hans predictors and assessing what machines really learn," Nature Communications, Nature, vol. 10(1), pages 1-8, December.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Tobias Thomas & Dominik Straub & Fabian Tatai & Megan Shene & Tümer Tosik & Kristian Kersting & Constantin A. Rothkopf, 2024. "Modelling dataset bias in machine-learned theories of economic decision-making," Nature Human Behaviour, Nature, vol. 8(4), pages 679-691, April.
    2. Van Den Hauwe, Ludwig, 2023. "Why Machines Will Not Replace Entrepreneurs. On the Inevitable Limitations of Artificial Intelligence in Economic Life," MPRA Paper 118307, University Library of Munich, Germany.
    3. Martin Obschonka & David B. Audretsch, 2020. "Artificial intelligence and big data in entrepreneurship: a new era has begun," Small Business Economics, Springer, vol. 55(3), pages 529-539, October.
    4. Jerome Friedman & Trevor Hastie & Robert Tibshirani, 2020. "Discussion of “Prediction, Estimation, and Attribution” by Bradley Efron," International Statistical Review, International Statistical Institute, vol. 88(S1), pages 73-74, December.
    5. Xun Li & Dongsheng Chen & Weipan Xu & Haohui Chen & Junjun Li & Fan Mo, 2023. "Explainable dimensionality reduction (XDR) to unbox AI ‘black box’ models: A study of AI perspectives on the ethnic styles of village dwellings," Palgrave Communications, Palgrave Macmillan, vol. 10(1), pages 1-13, December.
    6. March, Christoph, 2021. "Strategic interactions between humans and artificial intelligence: Lessons from experiments with computer players," Journal of Economic Psychology, Elsevier, vol. 87(C).
    7. Young Jae Kim & Jung-Im Na & Seung Seog Han & Chong Hyun Won & Mi Woo Lee & Jung-Won Shin & Chang-Hun Huh & Sung Eun Chang, 2022. "Augmenting the accuracy of trainee doctors in diagnosing skin lesions suspected of skin neoplasms in a real-world setting: A prospective controlled before-and-after study," PLOS ONE, Public Library of Science, vol. 17(1), pages 1-11, January.
    8. Minyoung Lee & Joohyoung Jeon & Hongchul Lee, 2022. "Explainable AI for domain experts: a post Hoc analysis of deep learning for defect classification of TFT–LCD panels," Journal of Intelligent Manufacturing, Springer, vol. 33(6), pages 1747-1759, August.
    9. Christoph March, 2019. "The Behavioral Economics of Artificial Intelligence: Lessons from Experiments with Computer Players," CESifo Working Paper Series 7926, CESifo.
    10. Shane Fox & James McDermott & Edelle Doherty & Ronan Cooney & Eoghan Clifford, 2022. "Application of Neural Networks and Regression Modelling to Enable Environmental Regulatory Compliance and Energy Optimisation in a Sequencing Batch Reactor," Sustainability, MDPI, vol. 14(7), pages 1-28, March.
    11. Verhagen, Mark D., 2021. "Identifying and Improving Functional Form Complexity: A Machine Learning Framework," SocArXiv bka76, Center for Open Science.
    12. Oliver T. Unke & Stefan Chmiela & Michael Gastegger & Kristof T. Schütt & Huziel E. Sauceda & Klaus-Robert Müller, 2021. "SpookyNet: Learning force fields with electronic degrees of freedom and nonlocal effects," Nature Communications, Nature, vol. 12(1), pages 1-14, December.
    13. Diderich, Claude, 2023. "The Truth Behind Artificial Intelligence: Illustrated by Designing an Investment Advice Solution," Journal of Financial Transformation, Capco Institute, vol. 58, pages 116-125.
    14. Minji Lee & Leandro R. D. Sanz & Alice Barra & Audrey Wolff & Jaakko O. Nieminen & Melanie Boly & Mario Rosanova & Silvia Casarotto & Olivier Bodart & Jitka Annen & Aurore Thibaut & Rajanikant Panda &, 2022. "Quantifying arousal and awareness in altered states of consciousness using interpretable deep learning," Nature Communications, Nature, vol. 13(1), pages 1-14, December.
    15. Wang, Fujin & Zhao, Zhibin & Zhai, Zhi & Shang, Zuogang & Yan, Ruqiang & Chen, Xuefeng, 2023. "Explainability-driven model improvement for SOH estimation of lithium-ion battery," Reliability Engineering and System Safety, Elsevier, vol. 232(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:sae:joudef:v:19:y:2022:i:2:p:133-144. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: SAGE Publications (email available below).

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.