Printed from https://ideas.repec.org/a/nat/nathum/v9y2025i2d10.1038_s41562-024-02077-2.html

How human–AI feedback loops alter human perceptual, emotional and social judgements

Authors

Listed:
  • Moshe Glickman

    (University College London)

  • Tali Sharot

    (University College London
    Massachusetts Institute of Technology)

Abstract

Artificial intelligence (AI) technologies are rapidly advancing, enhancing human capabilities in fields ranging from finance to medicine. Despite their numerous advantages, AI systems can exhibit biased judgements in domains ranging from perception to emotion. Here, in a series of experiments (n = 1,401 participants), we reveal a feedback loop in which human–AI interactions alter the processes underlying human perceptual, emotional and social judgements, subsequently amplifying biases in humans. This amplification is significantly greater than that observed in interactions between humans, owing both to the tendency of AI systems to amplify biases and to the way humans perceive AI systems. Participants are often unaware of the extent of the AI’s influence, rendering them more susceptible to it. These findings uncover a mechanism whereby AI systems amplify biases that are then internalized by humans, triggering a snowball effect in which small errors in judgement escalate into much larger ones.
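The snowball dynamic the abstract describes can be sketched with a minimal toy simulation. This is our own illustrative model, not the authors' experimental paradigm or analysis: a human judgement carries a small initial bias, an AI trained on that judgement amplifies the bias (`amplify` > 1), and the human then partially adopts the AI's output (`influence` in [0, 1]) before the next round. All parameter names and values are hypothetical.

```python
def feedback_loop(initial_bias=0.02, amplify=1.3, influence=0.5, rounds=10):
    """Toy human-AI feedback loop (illustrative assumption, not the paper's model).

    Each round: the AI, trained on the human's judgements, exaggerates the
    current human bias; the human then shifts partway toward the AI's output.
    """
    human_bias = initial_bias
    history = [human_bias]
    for _ in range(rounds):
        ai_bias = amplify * human_bias  # AI amplifies the bias it learned
        # Human internalizes part of the AI's (more biased) judgement
        human_bias = (1 - influence) * human_bias + influence * ai_bias
        history.append(human_bias)
    return history

trajectory = feedback_loop()
# With these toy parameters the bias is multiplied by
# (1 - influence) + influence * amplify = 1.15 each round,
# so a small initial error grows geometrically across interactions.
```

Under these assumptions the bias grows without bound whenever the per-round factor exceeds 1, which is the "small errors escalate into much larger ones" pattern; human-human interaction would correspond to a factor at or near 1, where no such escalation occurs.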

Suggested Citation

  • Moshe Glickman & Tali Sharot, 2025. "How human–AI feedback loops alter human perceptual, emotional and social judgements," Nature Human Behaviour, Nature, vol. 9(2), pages 345-359, February.
  • Handle: RePEc:nat:nathum:v:9:y:2025:i:2:d:10.1038_s41562-024-02077-2
    DOI: 10.1038/s41562-024-02077-2

    Download full text from publisher

    File URL: https://www.nature.com/articles/s41562-024-02077-2
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1038/s41562-024-02077-2?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Bin Zhou & Sen Pei & Lev Muchnik & Xiangyi Meng & Xiaoke Xu & Alon Sela & Shlomo Havlin & H. Eugene Stanley, 2020. "Realistic modelling of information spread using peer-to-peer diffusion patterns," Nature Human Behaviour, Nature, vol. 4(11), pages 1198-1207, November.
    2. Ma, Liye & Sun, Baohong, 2020. "Machine learning and AI in marketing – Connecting computing power to human insights," International Journal of Research in Marketing, Elsevier, vol. 37(3), pages 481-504.
    3. Logg, Jennifer M. & Minson, Julia A. & Moore, Don A., 2019. "Algorithm appreciation: People prefer algorithmic to human judgment," Organizational Behavior and Human Decision Processes, Elsevier, vol. 151(C), pages 90-103.
    4. Carey K. Morewedge & Sendhil Mullainathan & Haaya F. Naushan & Cass R. Sunstein & Jon Kleinberg & Manish Raghavan & Jens O. Ludwig, 2023. "Human bias in algorithm design," Nature Human Behaviour, Nature, vol. 7(11), pages 1822-1824, November.
    5. Dan Bang & Rani Moran & Nathaniel D. Daw & Stephen M. Fleming, 2022. "Neurocomputational mechanisms of confidence in self and others," Nature Communications, Nature, vol. 13(1), pages 1-14, December.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Volkmar, Gioia & Fischer, Peter M. & Reinecke, Sven, 2022. "Artificial Intelligence and Machine Learning: Exploring drivers, barriers, and future developments in marketing management," Journal of Business Research, Elsevier, vol. 149(C), pages 599-614.
    2. Hermann, Erik & Puntoni, Stefano, 2024. "Artificial intelligence and consumer behavior: From predictive to generative AI," Journal of Business Research, Elsevier, vol. 180(C).
    3. Yongping Bao & Ludwig Danwitz & Fabian Dvorak & Sebastian Fehrler & Lars Hornuf & Hsuan Yu Lin & Bettina von Helversen, 2022. "Similarity and Consistency in Algorithm-Guided Exploration," CESifo Working Paper Series 10188, CESifo.
    4. Yoganathan, Vignesh & Osburg, Victoria-Sophie, 2024. "The mind in the machine: Estimating mind perception's effect on user satisfaction with voice-based conversational agents," Journal of Business Research, Elsevier, vol. 175(C).
    5. Daniel Woods & Mustafa Abdallah & Saurabh Bagchi & Shreyas Sundaram & Timothy Cason, 2022. "Network defense and behavioral biases: an experimental study," Experimental Economics, Springer;Economic Science Association, vol. 25(1), pages 254-286, February.
    6. Siliang Tong & Nan Jia & Xueming Luo & Zheng Fang, 2021. "The Janus face of artificial intelligence feedback: Deployment versus disclosure effects on employee performance," Strategic Management Journal, Wiley Blackwell, vol. 42(9), pages 1600-1631, September.
    7. Christoph Riedl & Eric Bogert, 2024. "Effects of AI Feedback on Learning, the Skill Gap, and Intellectual Diversity," Papers 2409.18660, arXiv.org.
    8. Ghosh, Sourav & Yadav, Sarita & Devi, Ambika & Thomas, Tiju, 2022. "Techno-economic understanding of Indian energy-storage market: A perspective on green materials-based supercapacitor technologies," Renewable and Sustainable Energy Reviews, Elsevier, vol. 161(C).
    9. Bryce McLaughlin & Jann Spiess, 2022. "Algorithmic Assistance with Recommendation-Dependent Preferences," Papers 2208.07626, arXiv.org, revised Jan 2024.
    10. Markus Jung & Mischa Seiter, 2021. "Towards a better understanding on mitigating algorithm aversion in forecasting: an experimental study," Journal of Management Control: Zeitschrift für Planung und Unternehmenssteuerung, Springer, vol. 32(4), pages 495-516, December.
    11. Gómez de Ágreda, Ángel, 2020. "Ethics of autonomous weapons systems and its applicability to any AI systems," Telecommunications Policy, Elsevier, vol. 44(6).
    12. Yao, Xintong & Xi, Yipeng, 2024. "Pathways linking expectations for AI chatbots to loyalty: A moderated mediation analysis," Technology in Society, Elsevier, vol. 78(C).
    13. Ekaterina Jussupow & Kai Spohrer & Armin Heinzl & Joshua Gawlitza, 2021. "Augmenting Medical Diagnosis Decisions? An Investigation into Physicians’ Decision-Making Process with Artificial Intelligence," Information Systems Research, INFORMS, vol. 32(3), pages 713-735, September.
    14. Carlson, Keith & Kopalle, Praveen K. & Riddell, Allen & Rockmore, Daniel & Vana, Prasad, 2023. "Complementing human effort in online reviews: A deep learning approach to automatic content generation and review synthesis," International Journal of Research in Marketing, Elsevier, vol. 40(1), pages 54-74.
    15. Justina Sidlauskiene & Yannick Joye & Vilte Auruskeviciene, 2023. "AI-based chatbots in conversational commerce and their effects on product and price perceptions," Electronic Markets, Springer;IIM University of St. Gallen, vol. 33(1), pages 1-21, December.
    16. Johannes Habel & Sascha Alavi & Nicolas Heinitz, 2023. "A theory of predictive sales analytics adoption," AMS Review, Springer;Academy of Marketing Science, vol. 13(1), pages 34-54, June.
    17. Shiri Melumad & Rhonda Hadi & Christian Hildebrand & Adrian F. Ward, 2020. "Technology-Augmented Choice: How Digital Innovations Are Transforming Consumer Decision Processes," Customer Needs and Solutions, Springer;Institute for Sustainable Innovation and Growth (iSIG), vol. 7(3), pages 90-101, October.
    18. repec:cup:judgdm:v:15:y:2020:i:3:p:449-451 is not listed on IDEAS
    19. Wang, Xun & Rodrigues, Vasco Sanchez & Demir, Emrah & Sarkis, Joseph, 2024. "Algorithm aversion during disruptions: The case of safety stock," International Journal of Production Economics, Elsevier, vol. 278(C).
    20. Kevin Bauer & Andrej Gill, 2024. "Mirror, Mirror on the Wall: Algorithmic Assessments, Transparency, and Self-Fulfilling Prophecies," Information Systems Research, INFORMS, vol. 35(1), pages 226-248, March.
    21. Jean-Pierre Benoît & Juan Dubra & Giorgia Romagnoli, 2022. "Belief Elicitation When More than Money Matters: Controlling for "Control"," American Economic Journal: Microeconomics, American Economic Association, vol. 14(3), pages 837-888, August.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:nathum:v:9:y:2025:i:2:d:10.1038_s41562-024-02077-2. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.nature.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.