Printed from https://ideas.repec.org/p/dpr/wpaper/1233r.html

Do people rely on ChatGPT more than their peers to detect deepfake news?

Authors

  • Yuhao Fu
  • Nobuyuki Hanaki

Abstract

This experimental study investigates whether people rely more on ChatGPT (GPT-4) than on their human peers when detecting AI-generated fake news (deepfake news). Across multiple rounds of deepfake detection tasks conducted in a laboratory setting, student participants relied more on ChatGPT than on their peers. We examine this over-reliance on AI from two perspectives: the weight of advice (WOA) and the decomposition of reliance (DOR) into two stages. Our analysis indicates that reliance on external advice is driven primarily by the source and quality of the advice and by the subjects’ prior beliefs, knowledge, and experience, whereas the type of news and the time spent on tasks have no effect. Our results also point to a potential sequential mechanism of advice utilization, in which the advice source affects reliance in both stages (activation and integration), whereas advice quality, along with knowledge and experience, influences only the second stage. Our findings suggest that relying on AI to detect AI may not be detrimental and could, in fact, contribute to a deeper understanding of human-AI interaction and support advancements in AI development in the Generative Artificial Intelligence (GAI) era.
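
A note on the WOA measure: the sketch below computes the conventional weight-of-advice statistic from the advice-taking literature, in which 0 means the advice was ignored and 1 means it was fully adopted. The paper's exact operationalization for deepfake judgments is not reproduced here, so the function name, the clipping convention, and the example values are illustrative assumptions, not the authors' implementation.

    # A minimal sketch of the conventional weight-of-advice (WOA) measure.
    # Names, the clipping convention, and the example values are assumptions
    # for illustration; they are not taken from the paper.

    def weight_of_advice(initial: float, advice: float, final: float):
        """WOA = (final - initial) / (advice - initial).

        Returns None when the advice coincides with the initial judgment,
        since the ratio is then undefined.
        """
        if advice == initial:
            return None
        woa = (final - initial) / (advice - initial)
        # Common convention: clip to [0, 1] so over-shooting past the advice
        # or moving away from it does not produce values outside that range.
        return max(0.0, min(1.0, woa))

    # Example: initial judgment 40% "fake", advisor says 80%, final report 70%
    # -> WOA = (70 - 40) / (80 - 40) = 0.75 (the advice was largely adopted)
    print(weight_of_advice(40, 80, 70))  # 0.75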

Suggested Citation

  • Yuhao Fu & Nobuyuki Hanaki, 2024. "Do people rely on ChatGPT more than their peers to detect deepfake news?," ISER Discussion Paper 1233r, Institute of Social and Economic Research, Osaka University, revised Dec 2024.
  • Handle: RePEc:dpr:wpaper:1233r

    Download full text from publisher

    File URL: https://www.iser.osaka-u.ac.jp/library/dp/2024/DP1233R.pdf
    Download Restriction: no

    References listed on IDEAS

    1. Ben Greiner, 2015. "Subject pool recruitment procedures: organizing experiments with ORSEE," Journal of the Economic Science Association, Springer;Economic Science Association, vol. 1(1), pages 114-125, July.
    2. Christopher Whyte, 2020. "Deepfake news: AI-enabled disinformation as a multi-level public policy challenge," Journal of Cyber Policy, Taylor & Francis Journals, vol. 5(2), pages 199-217, May.
    3. Mesbah, Neda & Tauchert, Christoph & Buxmann, Peter, 2021. "Whose Advice Counts More – Man or Machine? An Experimental Investigation of AI-based Advice Utilization," Publications of Darmstadt Technical University, Institute for Business Studies (BWL) 124796, Darmstadt Technical University, Department of Business Administration, Economics and Law, Institute for Business Studies (BWL).
    4. Maggioni, Mario A. & Rossignoli, Domenico, 2023. "If it looks like a human and speaks like a human ... Communication and cooperation in strategic Human–Robot interactions," Journal of Behavioral and Experimental Economics (formerly The Journal of Socio-Economics), Elsevier, vol. 104(C).
    5. Nikhil Agarwal & Alex Moehring & Pranav Rajpurkar & Tobias Salz, 2023. "Combining Human Expertise with Artificial Intelligence: Experimental Evidence from Radiology," NBER Working Papers 31422, National Bureau of Economic Research, Inc.
    6. Margarita Leib & Nils Köbis & Rainer Michael Rilke & Marloes Hagens & Bernd Irlenbusch, 2024. "Corrupted by Algorithms? How AI-generated and Human-written Advice Shape (Dis)honesty," The Economic Journal, Royal Economic Society, vol. 134(658), pages 766-784.
    7. Tse, Tiffany Tsz Kwan & Hanaki, Nobuyuki & Mao, Bolin, 2024. "Beware the performance of an algorithm before relying on it: Evidence from a stock price forecasting experiment," Journal of Economic Psychology, Elsevier, vol. 102(C).
    8. Berkeley J. Dietvorst & Joseph P. Simmons & Cade Massey, 2018. "Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them," Management Science, INFORMS, vol. 64(3), pages 1155-1170, March.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. repec:dpr:wpaper:1233 is not listed on IDEAS
    2. Ivanova-Stenzel, Radosveta & Tolksdorf, Michel, 2024. "Measuring preferences for algorithms — How willing are people to cede control to algorithms?," Journal of Behavioral and Experimental Economics (formerly The Journal of Socio-Economics), Elsevier, vol. 112(C).
    3. Tse, Tiffany Tsz Kwan & Hanaki, Nobuyuki & Mao, Bolin, 2024. "Beware the performance of an algorithm before relying on it: Evidence from a stock price forecasting experiment," Journal of Economic Psychology, Elsevier, vol. 102(C).
    4. Ivanova-Stenzel, Radosveta & Tolksdorf, Michel, 2023. "Measuring Preferences for Algorithms - Are people really algorithm averse after seeing the algorithm perform?," VfS Annual Conference 2023 (Regensburg): Growth and the "sociale Frage" 277692, Verein für Socialpolitik / German Economic Association.
    5. Greiner, Ben & Grünwald, Philipp & Lindner, Thomas & Lintner, Georg & Wiernsperger, Martin, 2024. "Incentives, Framing, and Reliance on Algorithmic Advice: An Experimental Study," Department for Strategy and Innovation Working Paper Series 01/2024, WU Vienna University of Economics and Business.
    6. Back, Camila & Morana, Stefan & Spann, Martin, 2023. "When do robo-advisors make us better investors? The impact of social design elements on investor behavior," Journal of Behavioral and Experimental Economics (formerly The Journal of Socio-Economics), Elsevier, vol. 103(C).
    7. Wendelin Schnedler & Nina Lucia Stephan, 2020. "Revisiting a Remedy Against Chains of Unkindness," Schmalenbach Business Review, Springer;Schmalenbach-Gesellschaft, vol. 72(3), pages 347-364, July.
    8. David J. Cooper & Krista Saral & Marie Claire Villeval, 2021. "Why Join a Team?," Management Science, INFORMS, vol. 67(11), pages 6980-6997, November.
    9. Zakaria Babutsidze & Nobuyuki Hanaki & Adam Zylbersztejn, 2019. "Digital Communication and Swift Trust," Post-Print halshs-02409314, HAL.
    10. Galliera, Arianna, 2018. "Self-selecting random or cumulative pay? A bargaining experiment," Journal of Behavioral and Experimental Economics (formerly The Journal of Socio-Economics), Elsevier, vol. 72(C), pages 106-120.
    11. Julien Jacob & Eve-Angéline Lambert & Mathieu Lefebvre & Sarah Driessche, 2023. "Information disclosure under liability: an experiment on public bads," Social Choice and Welfare, Springer;The Society for Social Choice and Welfare, vol. 61(1), pages 155-197, July.
    12. Simon Gaechter & Chris Starmer & Fabio Tufano, 2022. "Measuring “group cohesion” to reveal the power of social relationships in team production," Discussion Papers 2022-12, The Centre for Decision Research and Experimental Economics, School of Economics, University of Nottingham.
    13. Gantner, Anita & Horn, Kristian & Kerschbamer, Rudolf, 2016. "Fair and efficient division through unanimity bargaining when claims are subjective," Journal of Economic Psychology, Elsevier, vol. 57(C), pages 56-73.
    14. Buser, Thomas & Ranehill, Eva & van Veldhuizen, Roel, 2021. "Gender differences in willingness to compete: The role of public observability," Journal of Economic Psychology, Elsevier, vol. 83(C).
    15. Ro’i Zultan & Eldar Dadon, 2023. "Missing the forest for the trees: when monitoring quantitative measures distorts task prioritization," Working Papers 2319, Ben-Gurion University of the Negev, Department of Economics.
    16. Kessel, Dany & Mollerstrom, Johanna & van Veldhuizen, Roel, 2021. "Can simple advice eliminate the gender gap in willingness to compete?," EconStor Open Access Articles and Book Chapters, ZBW - Leibniz Information Centre for Economics, vol. 138, pages 1-1.
    17. Aycinena, Diego & Bogliacino, Francesco & Kimbrough, Erik O., 2024. "Measuring norms: Assessing the threat of social desirability bias to the Bicchieri and Xiao elicitation method," Journal of Economic Behavior & Organization, Elsevier, vol. 222(C), pages 225-239.
    18. Ismaël Rafaï & Sébastien Duchêne & Eric Guerci & Irina Basieva & Andrei Khrennikov, 2022. "The triple-store experiment: a first simultaneous test of classical and quantum probabilities in choice over menus," Theory and Decision, Springer, vol. 92(2), pages 387-406, March.
    19. Matthew Embrey & Friederike Mengel & Ronald Peeters, 2019. "Strategy revision opportunities and collusion," Experimental Economics, Springer;Economic Science Association, vol. 22(4), pages 834-856, December.
    20. M. Djiguemde & D. Dubois & A. Sauquet & M. Tidball, 2022. "Continuous Versus Discrete Time in Dynamic Common Pool Resource Game Experiments," Environmental & Resource Economics, Springer;European Association of Environmental and Resource Economists, vol. 82(4), pages 985-1014, August.
    21. Daniel Woods & Mustafa Abdallah & Saurabh Bagchi & Shreyas Sundaram & Timothy Cason, 2022. "Network defense and behavioral biases: an experimental study," Experimental Economics, Springer;Economic Science Association, vol. 25(1), pages 254-286, February.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:dpr:wpaper:1233r. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Librarian. General contact details of provider: https://edirc.repec.org/data/isosujp.html.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.