
Generation Next: Experimentation with AI

Author

Listed:
  • Gary Charness
  • Brian Jabarian
  • John List

Abstract

We investigate the potential for Large Language Models (LLMs) to enhance scientific practice within experimentation by identifying key areas, directions, and implications. First, we discuss how these models can improve experimental design, including refining elicitation wording, coding experiments, and producing documentation. Second, we discuss the implementation of experiments using LLMs, focusing on enhancing causal inference by creating consistent experiences, improving comprehension of instructions, and monitoring participant engagement in real time. Third, we highlight how LLMs can help analyze experimental data, including pre-processing, data cleaning, and other analytical tasks, while also helping reviewers and replicators investigate studies. Each of these tasks improves the probability of reporting accurate findings. Finally, we recommend a scientific governance blueprint that manages the potential risks of using LLMs for experimental research while promoting their benefits. This could pave the way for open science opportunities and foster a culture of policy and industry experimentation at scale.

Suggested Citation

  • Gary Charness & Brian Jabarian & John List, 2023. "Generation Next: Experimentation with AI," Artefactual Field Experiments 00777, The Field Experiments Website.
  • Handle: RePEc:feb:artefa:00777

    Download full text from publisher

    File URL: http://s3.amazonaws.com/fieldexperiments-papers2/papers/00777.pdf
    Download Restriction: no

    References listed on IDEAS

    1. Susan Athey & Michael Luca, 2019. "Economists (and Economics) in Tech Companies," Journal of Economic Perspectives, American Economic Association, vol. 33(1), pages 209-230, Winter.
    2. Gary Charness & Guillaume R. Frechette & John H. Kagel, 2004. "How Robust is Laboratory Gift Exchange?," Experimental Economics, Springer;Economic Science Association, vol. 7(2), pages 189-205, June.
    3. Matthew O. Jackson, 2009. "Networks and Economic Behavior," Annual Review of Economics, Annual Reviews, vol. 1(1), pages 489-513, May.
    4. Korinek, Anton, 2023. "Language Models and Cognitive Automation for Economic Research," CEPR Discussion Papers 17923, C.E.P.R. Discussion Papers.
    5. Luigi Butera & Philip Grossman & Daniel Houser & John List & Marie-Claire Villeval, 2020. "A New Mechanism to Alleviate the Crises of Confidence in Science - With an Application to the Public Goods Game," Artefactual Field Experiments 00684, The Field Experiments Website.
    6. Deaton, Angus & Cartwright, Nancy, 2018. "Understanding and misunderstanding randomized controlled trials," Social Science & Medicine, Elsevier, vol. 210(C), pages 2-21.
    7. Guillaume R. Fréchette & Kim Sarnoff & Leeat Yariv, 2022. "Experimental Economics: Past and Future," Annual Review of Economics, Annual Reviews, vol. 14(1), pages 777-794, August.
    8. Richard A. Bettis, 2012. "The search for asterisks: Compromised statistical tests and flawed theories," Strategic Management Journal, Wiley Blackwell, vol. 33(1), pages 108-113, January.
    9. Colin F. Camerer, 2018. "Artificial Intelligence and Behavioral Economics," NBER Chapters, in: The Economics of Artificial Intelligence: An Agenda, pages 587-608, National Bureau of Economic Research, Inc.
    10. Gordon Pennycook & Ziv Epstein & Mohsen Mosleh & Antonio A. Arechar & Dean Eckles & David G. Rand, 2021. "Shifting attention to accuracy can reduce misinformation online," Nature, Nature, vol. 592(7855), pages 590-595, April.
    11. Erik Brynjolfsson & Danielle Li & Lindsey R. Raymond, 2023. "Generative AI at Work," NBER Working Papers 31161, National Bureau of Economic Research, Inc.
12. Luigi Butera & Philip J Grossman & Daniel Houser & John A List & Marie Claire Villeval, 2020. "A New Mechanism to Alleviate the Crises of Confidence in Science - With an Application to the Public Goods Game," Working Papers halshs-02512932, HAL.
    13. Jake M. Hofman & Duncan J. Watts & Susan Athey & Filiz Garip & Thomas L. Griffiths & Jon Kleinberg & Helen Margetts & Sendhil Mullainathan & Matthew J. Salganik & Simine Vazire & Alessandro Vespignani, 2021. "Integrating explanation and prediction in computational social science," Nature, Nature, vol. 595(7866), pages 181-188, July.
    14. Erik Snowberg & Leeat Yariv, 2021. "Testing the Waters: Behavior across Participant Pools," American Economic Review, American Economic Association, vol. 111(2), pages 687-719, February.
    15. John J. Horton, 2023. "Large Language Models as Simulated Economic Agents: What Can We Learn from Homo Silicus?," NBER Working Papers 31122, National Bureau of Economic Research, Inc.
    16. Alex Davies & Petar Veličković & Lars Buesing & Sam Blackwell & Daniel Zheng & Nenad Tomašev & Richard Tanburn & Peter Battaglia & Charles Blundell & András Juhász & Marc Lackenby & Geordie Williamson, 2021. "Advancing mathematics by guiding human intuition with AI," Nature, Nature, vol. 600(7887), pages 70-74, December.
    17. John J. Horton, 2023. "Large Language Models as Simulated Economic Agents: What Can We Learn from Homo Silicus?," Papers 2301.07543, arXiv.org.

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Samuel Chang & Andrew Kennedy & Aaron Leonard & John List, 2024. "12 Best Practices for Leveraging Generative AI in Experimental Research," Artefactual Field Experiments 00796, The Field Experiments Website.
    2. Brian Jabarian, 2024. "Large Language Models for Behavioral Economics: Internal Validity and Elicitation of Mental Models," Papers 2407.12032, arXiv.org.
    3. Nir Chemaya & Daniel Martin, 2023. "Perceptions and Detection of AI Use in Manuscript Preparation for Academic Journals," Papers 2311.14720, arXiv.org, revised Jan 2024.
    4. Rosa-García, Alfonso, 2024. "Student Reactions to AI-Replicant Professor in an Econ101 Teaching Video," MPRA Paper 120135, University Library of Munich, Germany.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Evangelos Katsamakas, 2024. "Business models for the simulation hypothesis," Papers 2404.08991, arXiv.org.
    2. Nir Chemaya & Daniel Martin, 2023. "Perceptions and Detection of AI Use in Manuscript Preparation for Academic Journals," Papers 2311.14720, arXiv.org, revised Jan 2024.
    3. Yiting Chen & Tracy Xiao Liu & You Shan & Songfa Zhong, 2023. "The emergence of economic rationality of GPT," Proceedings of the National Academy of Sciences, vol. 120(51), e2316205120, December.
    4. Samuel Chang & Andrew Kennedy & Aaron Leonard & John A. List, 2024. "12 Best Practices for Leveraging Generative AI in Experimental Research," NBER Working Papers 33025, National Bureau of Economic Research, Inc.
    5. Takeuchi, Ai & Seki, Erika, 2023. "Coordination and free-riding problems in the provision of multiple public goods," Journal of Economic Behavior & Organization, Elsevier, vol. 206(C), pages 95-121.
    6. Rosa-García, Alfonso, 2024. "Student Reactions to AI-Replicant Professor in an Econ101 Teaching Video," MPRA Paper 120135, University Library of Munich, Germany.
    7. John A. List, 2024. "Optimally generate policy-based evidence before scaling," Nature, Nature, vol. 626(7999), pages 491-499, February.
    8. Kirshner, Samuel N., 2024. "GPT and CLT: The impact of ChatGPT's level of abstraction on consumer recommendations," Journal of Retailing and Consumer Services, Elsevier, vol. 76(C).
    9. Elias Bouacida & Renaud Foucart, 2022. "Rituals of Reason," Working Papers 344119591, Lancaster University Management School, Economics Department.
    10. Eszter Czibor & David Jimenez‐Gomez & John A. List, 2019. "The Dozen Things Experimental Economists Should Do (More of)," Southern Economic Journal, John Wiley & Sons, vol. 86(2), pages 371-432, October.
    11. Lijia Ma & Xingchen Xu & Yong Tan, 2024. "Crafting Knowledge: Exploring the Creative Mechanisms of Chat-Based Search Engines," Papers 2402.19421, arXiv.org.
    12. Ali Goli & Amandeep Singh, 2023. "Exploring the Influence of Language on Time-Reward Perceptions in Large Language Models: A Study Using GPT-3.5," Papers 2305.02531, arXiv.org, revised Jun 2023.
    13. Gruner, Sven & Lehberger, Mira & Hirschauer, Norbert & Mußhoff, Oliver, 2022. "How (un)informative are experiments with students for other social groups? A study of agricultural students and farmers," Australian Journal of Agricultural and Resource Economics, Australian Agricultural and Resource Economics Society, vol. 66(03), January.
    14. John A. List & Azeem M. Shaikh & Atom Vayalinkal, 2023. "Multiple testing with covariate adjustment in experimental economics," Journal of Applied Econometrics, John Wiley & Sons, Ltd., vol. 38(6), pages 920-939, September.
    15. Christoph Engel & Max R. P. Grossmann & Axel Ockenfels, 2023. "Integrating machine behavior into human subject experiments: A user-friendly toolkit and illustrations," Discussion Paper Series of the Max Planck Institute for Research on Collective Goods 2024_01, Max Planck Institute for Research on Collective Goods.
    16. Jiafu An & Difang Huang & Chen Lin & Mingzhu Tai, 2024. "Measuring Gender and Racial Biases in Large Language Models," Papers 2403.15281, arXiv.org.
    17. Brodeur, Abel & Cook, Nikolai & Hartley, Jonathan & Heyes, Anthony, 2022. "Do Pre-Registration and Pre-analysis Plans Reduce p-Hacking and Publication Bias?," MetaArXiv uxf39, Center for Open Science.
    18. Fulin Guo, 2023. "GPT in Game Theory Experiments," Papers 2305.05516, arXiv.org, revised Dec 2023.
    19. Fabio Motoki & Valdemar Pinho Neto & Victor Rodrigues, 2024. "More human than human: measuring ChatGPT political bias," Public Choice, Springer, vol. 198(1), pages 3-23, January.
    20. Siting Estee Lu, 2024. "Strategic Interactions between Large Language Models-based Agents in Beauty Contests," Papers 2404.08492, arXiv.org, revised Oct 2024.

    More about this item

    JEL classification:

    • C0 - Mathematical and Quantitative Methods - - General
    • C1 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods and Methodology: General
    • C80 - Mathematical and Quantitative Methods - - Data Collection and Data Estimation Methodology; Computer Programs - - - General
    • C82 - Mathematical and Quantitative Methods - - Data Collection and Data Estimation Methodology; Computer Programs - - - Methodology for Collecting, Estimating, and Organizing Macroeconomic Data; Data Access
    • C87 - Mathematical and Quantitative Methods - - Data Collection and Data Estimation Methodology; Computer Programs - - - Econometric Software
    • C9 - Mathematical and Quantitative Methods - - Design of Experiments
    • C90 - Mathematical and Quantitative Methods - - Design of Experiments - - - General
    • C92 - Mathematical and Quantitative Methods - - Design of Experiments - - - Laboratory, Group Behavior
    • C99 - Mathematical and Quantitative Methods - - Design of Experiments - - - Other


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:feb:artefa:00777. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Francesca Pagnotta (email available below). General contact details of provider: http://www.fieldexperiments.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.