Printed from https://ideas.repec.org/a/gam/jrisks/v12y2024i10p164-d1499310.html

Credit Risk Assessment and Financial Decision Support Using Explainable Artificial Intelligence

Author

Listed:
  • M. K. Nallakaruppan

    (Balaji Institute of Modern Management, Sri Balaji University, Pune 411033, India)

  • Himakshi Chaturvedi

    (School of Computer Science and Engineering, Vellore Institute of Technology, Vellore 632104, India)

  • Veena Grover

    (Department of Management, Noida Institute of Engineering and Technology, Noida 201310, India)

  • Balamurugan Balusamy

    (Associate Dean, Student Affairs, Shiv Nadar University, Noida 201314, India)

  • Praveen Jaraut

    (Department of Electronics and Communication Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Bengaluru 560035, India)

  • Jitendra Bahadur

    (Department of Electronics and Communication Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Bengaluru 560035, India)

  • V. P. Meena

    (Department of Electrical Engineering, National Institute of Technology Jamshedpur, Jamshedpur 831014, India)

  • Ibrahim A. Hameed

    (Department of ICT and Natural Sciences, Norwegian University of Science and Technology, Larsgårdsvegen 2, 6009 Ålesund, Norway)

Abstract

Artificial intelligence (AI) has driven one of the greatest technological transformations the world has seen. It presents significant opportunities for the financial sector to enhance risk management, democratize financial services, ensure consumer protection, and improve the customer experience. Although modern machine learning models are more accessible than ever, building and deploying systems that support real-world financial applications remains challenging, primarily because these models lack the transparency and explainability essential for trustworthy technology. The novelty of this study lies in the development of an explainable AI (XAI) model that not only addresses these transparency concerns but also serves as a tool for policy development in credit risk management. By offering a clear understanding of the factors driving AI predictions, the proposed model can assist regulators and financial institutions in shaping data-driven policies, ensuring fairness, and enhancing trust. The study proposes an XAI model for credit risk management, specifically aimed at quantifying the risks associated with credit borrowing on peer-to-peer lending platforms. The model uses Shapley values to explain AI predictions in terms of the key explanatory variables. The decision tree and random forest models achieved the highest accuracies, 0.89 and 0.93, respectively. Performance was further tested on a larger dataset, where accuracy remained stable, with the decision tree and random forest models reaching 0.90 and 0.93, respectively. These models were chosen because the problem is a binary classification task, which supports reliable XAI modeling. LIME and SHAP were employed to present the XAI models as both local and global surrogates.
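The paper applies the SHAP and LIME libraries to tree-based classifiers; as a rough illustration of the underlying idea, the sketch below computes exact (interventional) Shapley attributions for a single prediction of a random forest by enumerating all feature coalitions and marginalising absent features over a background sample. The synthetic "credit" features, data-generating process, and model settings are illustrative assumptions, not the paper's actual dataset or pipeline.

```python
import itertools
import math

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for a credit dataset (hypothetical features):
# 0 = income, 1 = debt ratio, 2 = age, 3 = number of past defaults.
n = 500
X = rng.normal(size=(n, 4))
# Default risk driven mainly by debt ratio and past defaults.
y = (X[:, 1] + X[:, 3] + 0.3 * rng.normal(size=n) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def coalition_value(model, x, background, subset):
    """E[P(default)] when the features in `subset` are fixed to x's values
    and the remaining features are marginalised over a background sample."""
    Xb = background.copy()
    cols = list(subset)
    Xb[:, cols] = x[cols]
    return model.predict_proba(Xb)[:, 1].mean()

def shapley_values(model, x, background):
    """Exact Shapley attributions by enumerating all 2^d feature coalitions
    (feasible here because d = 4)."""
    d = x.shape[0]
    phi = np.zeros(d)
    for j in range(d):
        others = [k for k in range(d) if k != j]
        for r in range(d):
            for S in itertools.combinations(others, r):
                # Shapley weight |S|! (d - |S| - 1)! / d!
                w = math.factorial(r) * math.factorial(d - r - 1) / math.factorial(d)
                with_j = coalition_value(model, x, background, S + (j,))
                without_j = coalition_value(model, x, background, S)
                phi[j] += w * (with_j - without_j)
    return phi

x0 = X[0]
background = X[:100]
phi = shapley_values(model, x0, background)
```

A useful sanity check is the efficiency property: the attributions sum to the difference between the model's predicted default probability for `x0` and its average prediction over the background sample, which is the same additivity guarantee that SHAP explanations provide.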

Suggested Citation

  • M. K. Nallakaruppan & Himakshi Chaturvedi & Veena Grover & Balamurugan Balusamy & Praveen Jaraut & Jitendra Bahadur & V. P. Meena & Ibrahim A. Hameed, 2024. "Credit Risk Assessment and Financial Decision Support Using Explainable Artificial Intelligence," Risks, MDPI, vol. 12(10), pages 1-18, October.
  • Handle: RePEc:gam:jrisks:v:12:y:2024:i:10:p:164-:d:1499310

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-9091/12/10/164/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-9091/12/10/164/
    Download Restriction: no

    References listed on IDEAS

    1. Bastos, João A. & Matos, Sara M., 2022. "Explainable models of credit losses," European Journal of Operational Research, Elsevier, vol. 301(1), pages 386-394.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. González, Marta Ramos & Ureña, Antonio Partal & Fernández-Aguado, Pilar Gómez, 2023. "Forecasting for regulatory credit loss derived from the COVID-19 pandemic: A machine learning approach," Research in International Business and Finance, Elsevier, vol. 64(C).
    2. Sun, Weixin & Zhang, Xuantao & Li, Minghao & Wang, Yong, 2023. "Interpretable high-stakes decision support system for credit default forecasting," Technological Forecasting and Social Change, Elsevier, vol. 196(C).
    3. Janssens, Bram & Schetgen, Lisa & Bogaert, Matthias & Meire, Matthijs & Van den Poel, Dirk, 2024. "360 Degrees rumor detection: When explanations got some explaining to do," European Journal of Operational Research, Elsevier, vol. 317(2), pages 366-381.
    4. Piccialli, Veronica & Romero Morales, Dolores & Salvatore, Cecilia, 2024. "Supervised feature compression based on counterfactual analysis," European Journal of Operational Research, Elsevier, vol. 317(2), pages 273-285.
    5. Xiong Xiong & Fan Yang & Li Su, 2023. "Popularity, face and voice: Predicting and interpreting livestreamers' retail performance using machine learning techniques," Papers 2310.19200, arXiv.org.
    6. Petter Eilif de Lange & Borger Melsom & Christian Bakke Vennerød & Sjur Westgaard, 2022. "Explainable AI for Credit Assessment in Banks," JRFM, MDPI, vol. 15(12), pages 1-23, November.
    7. Nazemi, Abdolreza & Fabozzi, Frank J., 2024. "Interpretable machine learning for creditor recovery rates," Journal of Banking & Finance, Elsevier, vol. 164(C).
    8. Kraus, Mathias & Tschernutter, Daniel & Weinzierl, Sven & Zschech, Patrick, 2024. "Interpretable generalized additive neural networks," European Journal of Operational Research, Elsevier, vol. 317(2), pages 303-316.
    9. De Bock, Koen W. & Coussement, Kristof & Caigny, Arno De & Słowiński, Roman & Baesens, Bart & Boute, Robert N. & Choi, Tsan-Ming & Delen, Dursun & Kraus, Mathias & Lessmann, Stefan & Maldonado, Sebast, 2024. "Explainable AI for Operational Research: A defining framework, methods, applications, and a research agenda," European Journal of Operational Research, Elsevier, vol. 317(2), pages 249-272.
    10. Koen W. de Bock & Kristof Coussement & Arno De Caigny & Roman Slowiński & Bart Baesens & Robert N Boute & Tsan-Ming Choi & Dursun Delen & Mathias Kraus & Stefan Lessmann & Sebastián Maldonado & David , 2023. "Explainable AI for Operational Research: A Defining Framework, Methods, Applications, and a Research Agenda," Post-Print hal-04219546, HAL.
    11. Julia Brasse & Hanna Rebecca Broder & Maximilian Förster & Mathias Klier & Irina Sigler, 2023. "Explainable artificial intelligence in information systems: A review of the status quo and future research directions," Electronic Markets, Springer;IIM University of St. Gallen, vol. 33(1), pages 1-30, December.
    12. Ahmed, Abdulaziz & Topuz, Kazim & Moqbel, Murad & Abdulrashid, Ismail, 2024. "What makes accidents severe! explainable analytics framework with parameter optimization," European Journal of Operational Research, Elsevier, vol. 317(2), pages 425-436.
    13. Thuy, Arthur & Benoit, Dries F., 2024. "Explainability through uncertainty: Trustworthy decision-making with neural networks," European Journal of Operational Research, Elsevier, vol. 317(2), pages 330-340.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jrisks:v:12:y:2024:i:10:p:164-:d:1499310. See general information about how to correct material in RePEc.


    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.