
How consumers respond to service failures caused by algorithmic mistakes: The role of algorithmic interpretability

Author

Listed:
  • Chen, Changdong

Abstract

Although algorithm-based AI is transforming business and society, there is growing evidence of service failures caused by algorithmic mistakes. Because of the “black box” nature of algorithmic decisions, consumers are frustrated not only by the mistakes themselves but also by the decisions’ lack of interpretability. The current research therefore focuses on how enhancing algorithmic interpretability through Explainable Artificial Intelligence (XAI) approaches (e.g., post-hoc explanations) shapes consumer reactions to service failures resulting from algorithmic mistakes. Across four experimental studies, the authors demonstrate that consumers react less negatively to service failures caused by algorithmic (rather than human) mistakes when algorithmic interpretability is enhanced. This effect is driven primarily by reduced blame assigned to algorithms. Furthermore, the beneficial effect disappears when algorithms are employed for an objective (vs. a subjective) task and when algorithms are at a weak (vs. strong) intelligence stage.

Suggested Citation

  • Chen, Changdong, 2024. "How consumers respond to service failures caused by algorithmic mistakes: The role of algorithmic interpretability," Journal of Business Research, Elsevier, vol. 176(C).
  • Handle: RePEc:eee:jbrese:v:176:y:2024:i:c:s0148296324001140
    DOI: 10.1016/j.jbusres.2024.114610

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0148296324001140
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.jbusres.2024.114610?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy of this item that you can access through your library subscription
    ---><---

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Chiara Longoni & Andrea Bonezzi & Carey K. Morewedge, 2019. "Resistance to Medical Artificial Intelligence," Journal of Consumer Research, Journal of Consumer Research Inc., vol. 46(4), pages 629-650.
    2. Arun Rai, 2020. "Explainable AI: from black box to glass box," Journal of the Academy of Marketing Science, Springer, vol. 48(1), pages 137-141, January.
    3. Tripat Gill & Eileen Fischer & Amna Kirmani & Pankaj Aggarwal, 2020. "Blame It on the Self-Driving Car: How Autonomous Vehicles Can Alter Consumer Morality," Journal of Consumer Research, Journal of Consumer Research Inc., vol. 47(2), pages 272-291.
    4. Romain Cadario & Chiara Longoni & Carey K. Morewedge, 2021. "Understanding, explaining, and utilizing medical artificial intelligence," Nature Human Behaviour, Nature, vol. 5(12), pages 1636-1642, December.
    5. Phyliss Jia Gai & Stefano Puntoni & Margaret C. Campbell & Peter R. Darke, 2021. "Language and Consumer Dishonesty: A Self-Diagnosticity Theory," Journal of Consumer Research, Journal of Consumer Research Inc., vol. 48(2), pages 333-351.
    6. Edmond Awad & Sydney Levine & Max Kleiman-Weiner & Sohan Dsouza & Joshua B. Tenenbaum & Azim Shariff & Jean-François Bonnefon & Iyad Rahwan, 2020. "Drivers are blamed more than their automated cars when both make mistakes," Nature Human Behaviour, Nature, vol. 4(2), pages 134-143, February.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Leah Warfield Smith & Randall Lee Rose & Alex R. Zablah & Heath McCullough & Mohammad “Mike” Saljoughian, 2023. "Examining post-purchase consumer responses to product automation," Journal of the Academy of Marketing Science, Springer, vol. 51(3), pages 530-550, May.
    2. Zhao, Taiyang & Ran, Yaxuan & Wu, Banggang & Lynette Wang, Valerie & Zhou, Liying & Lu Wang, Cheng, 2024. "Virtual versus human: Unraveling consumer reactions to service failures through influencer types," Journal of Business Research, Elsevier, vol. 178(C).
    3. Ekaterina Jussupow & Kai Spohrer & Armin Heinzl & Joshua Gawlitza, 2021. "Augmenting Medical Diagnosis Decisions? An Investigation into Physicians’ Decision-Making Process with Artificial Intelligence," Information Systems Research, INFORMS, vol. 32(3), pages 713-735, September.
    4. Yang, Yikai & Zheng, Jiehui & Yu, Yining & Qiu, Yiling & Wang, Lei, 2024. "The role of recommendation sources and attribute framing in online product recommendations," Journal of Business Research, Elsevier, vol. 174(C).
    5. Manjunath Padigar & Ljubomir Pupovac & Ashish Sinha & Rajendra Srivastava, 2022. "The effect of marketing department power on investor responses to announcements of AI-embedded new product innovations," Journal of the Academy of Marketing Science, Springer, vol. 50(6), pages 1277-1298, November.
    6. Chenfeng Yan & Quan Chen & Xinyue Zhou & Xin Dai & Zhilin Yang, 2024. "When the Automated fire Backfires: The Adoption of Algorithm-based HR Decision-making Could Induce Consumer’s Unfavorable Ethicality Inferences of the Company," Journal of Business Ethics, Springer, vol. 190(4), pages 841-859, April.
    7. Ming-Hui Huang & Roland T. Rust, 2021. "A strategic framework for artificial intelligence in marketing," Journal of the Academy of Marketing Science, Springer, vol. 49(1), pages 30-50, January.
    8. Wang, Cuicui & Li, Yiyang & Fu, Weizhong & Jin, Jia, 2023. "Whether to trust chatbots: Applying the event-related approach to understand consumers’ emotional experiences in interactions with chatbots in e-commerce," Journal of Retailing and Consumer Services, Elsevier, vol. 73(C).
    9. Huang, Xiaozhi & Wu, Xitong & Cao, Xin & Wu, Jifei, 2023. "The effect of medical artificial intelligence innovation locus on consumer adoption of new products," Technological Forecasting and Social Change, Elsevier, vol. 197(C).
    10. Florian Pethig & Julia Kroenung, 2023. "Biased Humans, (Un)Biased Algorithms?," Journal of Business Ethics, Springer, vol. 183(3), pages 637-652, March.
    11. Grewal, Dhruv & Guha, Abhijit & Satornino, Cinthia B. & Schweiger, Elisa B., 2021. "Artificial intelligence: The light and the darkness," Journal of Business Research, Elsevier, vol. 136(C), pages 229-236.
    12. Erik Hermann, 2022. "Leveraging Artificial Intelligence in Marketing for Social Good—An Ethical Perspective," Journal of Business Ethics, Springer, vol. 179(1), pages 43-61, August.
    13. Tinglong Dai & Sridhar Tayur, 2022. "Designing AI‐augmented healthcare delivery systems for physician buy‐in and patient acceptance," Production and Operations Management, Production and Operations Management Society, vol. 31(12), pages 4443-4451, December.
    14. Siliang Tong & Nan Jia & Xueming Luo & Zheng Fang, 2021. "The Janus face of artificial intelligence feedback: Deployment versus disclosure effects on employee performance," Strategic Management Journal, Wiley Blackwell, vol. 42(9), pages 1600-1631, September.
    15. Nan Zhang & Heng Xu, 2024. "Fairness of Ratemaking for Catastrophe Insurance: Lessons from Machine Learning," Information Systems Research, INFORMS, vol. 35(2), pages 469-488, June.
    16. Qian, Lixian & Yin, Juelin & Huang, Youlin & Liang, Ya, 2023. "The role of values and ethics in influencing consumers’ intention to use autonomous vehicle hailing services," Technological Forecasting and Social Change, Elsevier, vol. 188(C).
    17. Chamaret, Cécile & Steyer, Véronique & Mayer, Julie C., 2020. "“Hands off my meter!” when municipalities resist smart meters: Linking arguments and degrees of resistance," Energy Policy, Elsevier, vol. 144(C).
    18. Maude Lavanchy & Patrick Reichert & Jayanth Narayanan & Krishna Savani, 2023. "Applicants’ Fairness Perceptions of Algorithm-Driven Hiring Procedures," Journal of Business Ethics, Springer, vol. 188(1), pages 125-150, November.
    19. Tse, Tiffany Tsz Kwan & Hanaki, Nobuyuki & Mao, Bolin, 2024. "Beware the performance of an algorithm before relying on it: Evidence from a stock price forecasting experiment," Journal of Economic Psychology, Elsevier, vol. 102(C).
    20. Nika Meyer (née Mozafari) & Melanie Schwede & Maik Hammerschmidt & Welf Hermann Weiger, 2022. "Users taking the blame? How service failure, recovery, and robot design affect user attributions and retention," Electronic Markets, Springer; IIM University of St. Gallen, vol. 32(4), pages 2491-2505, December.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:jbrese:v:176:y:2024:i:c:s0148296324001140. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows us to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/locate/jbusres.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.