
Putting a Human in the Loop: Increasing Uptake, but Decreasing Accuracy of Automated Decision-Making

Author

Listed:
  • Daniela Sele

    (ETH)

  • Marina Chugunova

    (Max Planck Institute for Innovation and Competition)

Abstract

Are people algorithm averse, as some previous literature indicates? If so, can the retention of human oversight increase the uptake of algorithmic recommendations, and does keeping a human in the loop improve accuracy? Answers to these questions are of utmost importance given the fast-growing availability of algorithmic recommendations and the current intense discussions about regulation of automated decision-making. In an online experiment, we find that 66% of participants prefer algorithmic to equally accurate human recommendations if the decision is delegated fully. This preference for algorithms increases by a further 7 percentage points if participants are able to monitor and adjust the recommendations before the decision is made. In line with automation bias, participants adjust recommendations that stem from an algorithm by less than those that stem from another human. Importantly, participants are less likely to intervene in the least accurate recommendations and adjust them by less, raising concerns about the monitoring ability of a human in a Human-in-the-Loop system. Our results document a trade-off: while allowing people to adjust algorithmic recommendations increases their uptake, the adjustments made by the human monitors reduce the quality of the final decisions.
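
To make the documented trade-off concrete, below is a minimal, hypothetical Monte Carlo sketch (Python, with made-up parameters; it does not reproduce the authors' experimental design or data). It assumes a monitor who nudges a relatively accurate algorithmic recommendation toward their own noisier judgment, and it shows how such partial adjustments can lower the accuracy of the final decision even while they make the recommendation more acceptable to use.

    # Hypothetical illustration only: the parameters and the adjustment rule below
    # are assumptions for this sketch, not the paper's design or data.
    import random

    random.seed(0)
    N = 10_000
    SIGMA_ALGO = 1.0    # error spread of the algorithmic recommendation (assumed)
    SIGMA_HUMAN = 3.0   # error spread of the monitor's own judgment (assumed, noisier)
    ADJUSTMENT = 0.3    # fraction the monitor moves toward their own judgment (assumed)

    mse_algo = mse_final = 0.0
    for _ in range(N):
        truth = random.gauss(0.0, 5.0)                  # the unknown correct value
        algo = truth + random.gauss(0.0, SIGMA_ALGO)    # algorithmic recommendation
        human = truth + random.gauss(0.0, SIGMA_HUMAN)  # monitor's independent judgment
        final = (1 - ADJUSTMENT) * algo + ADJUSTMENT * human  # human-in-the-loop decision
        mse_algo += (algo - truth) ** 2
        mse_final += (final - truth) ** 2

    print(f"Mean squared error, unmodified algorithm:   {mse_algo / N:.2f}")
    print(f"Mean squared error, after human adjustment: {mse_final / N:.2f}")
    # With these assumed values the adjusted decisions come out less accurate
    # (expected MSE of 0.7^2*1 + 0.3^2*9 = 1.30 vs. 1.00 for the raw algorithm),
    # mirroring the trade-off described in the abstract.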

Suggested Citation

  • Daniela Sele & Marina Chugunova, 2023. "Putting a Human in the Loop: Increasing Uptake, but Decreasing Accuracy of Automated Decision-Making," Rationality and Competition Discussion Paper Series 438, CRC TRR 190 Rationality and Competition.
  • Handle: RePEc:rco:dpaper:438

    Download full text from publisher

    File URL: https://rationality-and-competition.de/wp-content/uploads/discussion_paper/438.pdf
    Download Restriction: no

    References listed on IDEAS

    1. Edwards, Lilian & Veale, Michael, 2017. "Slave to the Algorithm? Why a 'right to an explanation' is probably not the remedy you are looking for," LawArXiv 97upg, Center for Open Science.
    2. Berkeley J. Dietvorst & Joseph P. Simmons & Cade Massey, 2018. "Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them," Management Science, INFORMS, vol. 64(3), pages 1155-1170, March.
    3. Jon Kleinberg & Himabindu Lakkaraju & Jure Leskovec & Jens Ludwig & Sendhil Mullainathan, 2018. "Human Decisions and Machine Predictions," The Quarterly Journal of Economics, President and Fellows of Harvard College, vol. 133(1), pages 237-293.
    4. Chugunova, Marina & Sele, Daniela, 2022. "We and It: An interdisciplinary review of the experimental evidence on how humans interact with machines," Journal of Behavioral and Experimental Economics (formerly The Journal of Socio-Economics), Elsevier, vol. 99(C).
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Gorny, Paul M. & Groos, Eva & Strobel, Christina, 2024. "Do Personalized AI Predictions Change Subsequent Decision-Outcomes? The Impact of Human Oversight," MPRA Paper 121065, University Library of Munich, Germany.
    2. Ekaterina Jussupow & Kai Spohrer & Armin Heinzl & Joshua Gawlitza, 2021. "Augmenting Medical Diagnosis Decisions? An Investigation into Physicians’ Decision-Making Process with Artificial Intelligence," Information Systems Research, INFORMS, vol. 32(3), pages 713-735, September.
    3. Scott Schanke & Gordon Burtch & Gautam Ray, 2021. "Estimating the Impact of “Humanizing” Customer Service Chatbots," Information Systems Research, INFORMS, vol. 32(3), pages 736-751, September.
    4. Kevin Bauer & Andrej Gill, 2024. "Mirror, Mirror on the Wall: Algorithmic Assessments, Transparency, and Self-Fulfilling Prophecies," Information Systems Research, INFORMS, vol. 35(1), pages 226-248, March.
    5. Keding, Christoph & Meissner, Philip, 2021. "Managerial overreliance on AI-augmented decision-making processes: How the use of AI-based advisory systems shapes choice behavior in R&D investment decisions," Technological Forecasting and Social Change, Elsevier, vol. 171(C).
    6. Chugunova, Marina & Sele, Daniela, 2022. "We and It: An interdisciplinary review of the experimental evidence on how humans interact with machines," Journal of Behavioral and Experimental Economics (formerly The Journal of Socio-Economics), Elsevier, vol. 99(C).
    7. Dargnies, Marie-Pierre & Hakimov, Rustamdjan & Kübler, Dorothea, 2022. "Aversion to hiring algorithms: Transparency, gender profiling, and self-confidence," Discussion Papers, Research Unit: Market Behavior SP II 2022-202, WZB Berlin Social Science Center.
    8. Talia Gillis & Bryce McLaughlin & Jann Spiess, 2021. "On the Fairness of Machine-Assisted Human Decisions," Papers 2110.15310, arXiv.org, revised Sep 2023.
    9. Said Kaawach & Oskar Kowalewski & Oleksandr Talavera, 2023. "Automatic vs Manual Investing: Role of Past Performance," Discussion Papers 23-04, Department of Economics, University of Birmingham.
    10. Saravanan Kesavan & Tarun Kushwaha, 2020. "Field Experiment on the Profit Implications of Merchants’ Discretionary Power to Override Data-Driven Decision-Making Tools," Management Science, INFORMS, vol. 66(11), pages 5182-5190, November.
    11. Fumagalli, Elena & Rezaei, Sarah & Salomons, Anna, 2022. "OK computer: Worker perceptions of algorithmic recruitment," Research Policy, Elsevier, vol. 51(2).
    12. Vomberg, Arnd & Schauerte, Nico & Krakowski, Sebastian & Ingram Bogusz, Claire & Gijsenberg, Maarten J. & Bleier, Alexander, 2023. "The cold-start problem in nascent AI strategy: Kickstarting data network effects," Journal of Business Research, Elsevier, vol. 168(C).
    13. Bansak, Kirk & Paulson, Elisabeth, 2023. "Public Opinion on Fairness and Efficiency for Algorithmic and Human Decision-Makers," OSF Preprints pghmx, Center for Open Science.
    14. Maria De‐Arteaga & Stefan Feuerriegel & Maytal Saar‐Tsechansky, 2022. "Algorithmic fairness in business analytics: Directions for research and practice," Production and Operations Management, Production and Operations Management Society, vol. 31(10), pages 3749-3770, October.
    15. Bauer, Kevin & von Zahn, Moritz & Hinz, Oliver, 2022. "Expl(AI)ned: The impact of explainable Artificial Intelligence on cognitive processes," SAFE Working Paper Series 315, Leibniz Institute for Financial Research SAFE, revised 2022.
    16. Sophie-Charlotte Klose & Johannes Lederer, 2020. "A Pipeline for Variable Selection and False Discovery Rate Control With an Application in Labor Economics," Papers 2006.12296, arXiv.org, revised Jun 2020.
    17. Dionissi Aliprantis & Hal Martin & Kristen Tauber, 2020. "What Determines the Success of Housing Mobility Programs?," Working Papers 20-36R, Federal Reserve Bank of Cleveland, revised 19 Oct 2022.
    18. Michael Vössing & Niklas Kühl & Matteo Lind & Gerhard Satzger, 2022. "Designing Transparency for Effective Human-AI Collaboration," Information Systems Frontiers, Springer, vol. 24(3), pages 877-895, June.
    19. Daníelsson, Jón & Macrae, Robert & Uthemann, Andreas, 2022. "Artificial intelligence and systemic risk," Journal of Banking & Finance, Elsevier, vol. 140(C).
    20. Yucheng Yang & Zhong Zheng & Weinan E, 2020. "Interpretable Neural Networks for Panel Data Analysis in Economics," Papers 2010.05311, arXiv.org, revised Nov 2020.

    More about this item

    Keywords

    automated decision-making; algorithm aversion; algorithm appreciation; automation bias;

    JEL classification:

    • O33 - Economic Development, Innovation, Technological Change, and Growth - - Innovation; Research and Development; Technological Change; Intellectual Property Rights - - - Technological Change: Choices and Consequences; Diffusion Processes
    • C90 - Mathematical and Quantitative Methods - - Design of Experiments - - - General
    • D90 - Microeconomics - - Micro-Based Behavioral Economics - - - General


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:rco:dpaper:438. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Viviana Lalli (email available below). General contact details of provider: https://rationality-and-competition.de.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.