Author
Listed:
- Heitor Hoffman Nakashima
- Daielly Mantovani
- Celso Machado Junior
Abstract
Purpose - This paper aims to investigate whether professional data analysts' trust in black-box systems is increased by explainability artifacts.
Design/methodology/approach - The study was developed in two phases. First, a black-box prediction model was estimated using artificial neural networks, and local explainability artifacts were generated using the local interpretable model-agnostic explanations (LIME) algorithm. In the second phase, the model and explainability outcomes were presented to a sample of data analysts from the financial market, and their trust in the models was measured. Finally, interviews were conducted to understand their perceptions regarding black-box models.
Findings - The data suggest that users' trust in black-box systems is high and that explainability artifacts do not influence this behavior. The interviews reveal that the nature and complexity of the problem a black-box model addresses influence users' perceptions, with trust being reduced in situations that represent a threat (e.g. autonomous cars). Interviewees also raised concerns about the models' ethics.
Research limitations/implications - The study considered a small sample of professional analysts from the financial market, which traditionally employs data analysis techniques for credit and risk analysis. Research with personnel in other sectors might reveal different perceptions.
Originality/value - Other studies regarding trust in black-box models and explainability artifacts have focused on ordinary users with little or no knowledge of data analysis. The present research focuses on expert users, which provides a different perspective and shows that, for them, trust is related to the quality of the data and the nature of the problem being solved, as well as to the practical consequences. Explanation of the algorithm's mechanics itself is not significantly relevant.
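As a rough illustration of the technique named in the abstract (not the authors' code or data), the sketch below trains a simple neural-network classifier and produces a LIME local explanation for one prediction. The dataset, feature names, and class labels are placeholders; it assumes the scikit-learn and lime packages.

    # Illustrative sketch only: black-box neural network + LIME local explanation
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from lime.lime_tabular import LimeTabularExplainer

    # Synthetic stand-in for a credit/risk dataset (placeholder, not the study's data)
    X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # "Black-box" model: a feed-forward artificial neural network
    model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
    model.fit(X_train, y_train)

    # Local explainability artifact: LIME fits a simple surrogate model around one
    # prediction and reports the locally most influential features
    explainer = LimeTabularExplainer(
        X_train,
        feature_names=[f"feature_{i}" for i in range(X.shape[1])],
        class_names=["low risk", "high risk"],  # hypothetical labels
        mode="classification",
    )
    explanation = explainer.explain_instance(X_test[0], model.predict_proba, num_features=4)
    print(explanation.as_list())  # (feature condition, local weight) pairs

The printed list is the kind of per-prediction artifact the study presented to analysts alongside the model's output.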
Suggested Citation
Heitor Hoffman Nakashima & Daielly Mantovani & Celso Machado Junior, 2022.
"Users’ trust in black-box machine learning algorithms,"
Revista de Gestão, Emerald Group Publishing Limited, vol. 31(2), pages 237-250, October.
Handle:
RePEc:eme:regepp:rege-06-2022-0100
DOI: 10.1108/REGE-06-2022-0100
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eme:regepp:rege-06-2022-0100. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Emerald Support.
Please note that corrections may take a couple of weeks to filter through
the various RePEc services.