Printed from https://ideas.repec.org/a/taf/tjorxx/v73y2022i1p70-90.html

Transparency, auditability, and explainability of machine learning models in credit scoring

Authors

  • Michael Bücker
  • Gero Szepannek
  • Alicja Gosiewska
  • Przemyslaw Biecek

Abstract

A major requirement for credit scoring models is to provide maximally accurate risk predictions. Additionally, regulators demand that these models be transparent and auditable. Thus, in credit scoring, very simple predictive models such as logistic regression or decision trees are still widely used, and the superior predictive power of modern machine learning algorithms cannot be fully leveraged. Significant potential is therefore missed, leading to higher reserves or more credit defaults. This article works out the different dimensions that have to be considered to make credit scoring models understandable and presents a framework for making “black box” machine learning models transparent, auditable, and explainable. Following this framework, we present an overview of techniques, demonstrate how they can be applied in credit scoring, and show how the results compare to the interpretability of scorecards. A real-world case study shows that a comparable degree of interpretability can be achieved while machine learning techniques retain their ability to improve predictive power.
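The contrast the abstract draws can be illustrated with a minimal sketch: a transparent logistic regression baseline (whose coefficients are directly auditable, as in a scorecard) next to a black-box gradient boosting model, with a model-agnostic technique (permutation importance) providing a comparable per-feature explanation for the latter. The synthetic data and model choices below are illustrative assumptions, not the article's actual method or data.

```python
# Illustrative sketch (not from the article): interpretable scorecard-style
# model vs. black-box model plus a model-agnostic explanation technique.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for credit application data (1 = default, 0 = non-default).
X, y = make_classification(n_samples=2000, n_features=6, n_informative=4,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Transparent baseline: coefficients can be read off directly, like a scorecard.
scorecard = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Black-box challenger: typically higher predictive power, but no directly
# interpretable parameters.
gbm = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Model-agnostic explanation: permutation importance applies to both models,
# giving an auditable per-feature view of what drives the risk prediction.
imp = permutation_importance(gbm, X_test, y_test, n_repeats=10, random_state=0)
for i in np.argsort(imp.importances_mean)[::-1]:
    print(f"feature {i}: mean importance {imp.importances_mean[i]:.3f}")
```

Because permutation importance only needs model predictions, the same explanation pipeline can be run for the scorecard and the black-box model, which is the kind of like-for-like interpretability comparison the article's framework calls for.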

Suggested Citation

  • Michael Bücker & Gero Szepannek & Alicja Gosiewska & Przemyslaw Biecek, 2022. "Transparency, auditability, and explainability of machine learning models in credit scoring," Journal of the Operational Research Society, Taylor & Francis Journals, vol. 73(1), pages 70-90, January.
  • Handle: RePEc:taf:tjorxx:v:73:y:2022:i:1:p:70-90
    DOI: 10.1080/01605682.2021.1922098

    Download full text from publisher

    File URL: http://hdl.handle.net/10.1080/01605682.2021.1922098
    Download Restriction: Access to full text is restricted to subscribers.

    File URL: https://libkey.io/10.1080/01605682.2021.1922098?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    As the access to this document is restricted, you may want to search for a different version of it.

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Yang, Fan & Abedin, Mohammad Zoynul & Hajek, Petr, 2024. "An explainable federated learning and blockchain-based secure credit modeling method," European Journal of Operational Research, Elsevier, vol. 317(2), pages 449-467.
    2. Babaei, Golnoosh & Giudici, Paolo & Raffinetti, Emanuela, 2023. "Explainable FinTech lending," Journal of Economics and Business, Elsevier, vol. 125.
    3. Gero Szepannek, 2022. "An Overview on the Landscape of R Packages for Open Source Scorecard Modelling," Risks, MDPI, vol. 10(3), pages 1-33, March.
    4. Ghosh, Indranil & Jana, Rabin K. & David, Roubaud & Grebinevych, Oksana & Wanke, Peter & Tan, Yong, 2024. "Modelling financial stress during the COVID-19 pandemic: Prediction and deeper insights," International Review of Economics & Finance, Elsevier, vol. 91(C), pages 680-698.
    5. Yang Liu & Fei Huang & Lili Ma & Qingguo Zeng & Jiale Shi, 2024. "Credit scoring prediction leveraging interpretable ensemble learning," Journal of Forecasting, John Wiley & Sons, Ltd., vol. 43(2), pages 286-308, March.
    6. Andrés Alonso & José Manuel Carbó, 2022. "Accuracy of explanations of machine learning models for credit decisions," Working Papers 2222, Banco de España.
    7. Chen, Dangxing & Ye, Jiahui & Ye, Weicheng, 2023. "Interpretable selective learning in credit risk," Research in International Business and Finance, Elsevier, vol. 65(C).
    8. Janssens, Bram & Schetgen, Lisa & Bogaert, Matthias & Meire, Matthijs & Van den Poel, Dirk, 2024. "360 Degrees rumor detection: When explanations got some explaining to do," European Journal of Operational Research, Elsevier, vol. 317(2), pages 366-381.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:taf:tjorxx:v:73:y:2022:i:1:p:70-90. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Chris Longhurst (email available below). General contact details of provider: http://www.tandfonline.com/tjor .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.