
A primer for the use of classifier and generative large language models in social science research

Author

Listed:
  • Cova, Joshua
  • Schmitz, Luuk

Abstract

The emergence of generative AI models is rapidly changing the social sciences. Much has now been written on the ethics and epistemological considerations of using these tools. Meanwhile, AI-powered research increasingly makes its way to preprint servers. However, we see a gap between ethics and practice: while many researchers would like to use these tools, few if any guides on how to do so exist. This paper fills this gap by providing users with a hands-on application written in accessible language. The paper deals with what we consider the most likely and advanced use case for AI in the social sciences: text annotation and classification. Our application guides readers through setting up a text classification pipeline and evaluating the results. The most important considerations concern reproducibility and transparency, open-source versus closed-source models, as well as the difference between classifier and generative models. The take-home message is this: these models provide unprecedented scale to augment research, but the community must take open-source and locally deployable models seriously in the interest of open science principles. Our code to reproduce the example can be accessed via GitHub.
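As a minimal sketch of the kind of pipeline the abstract describes (this is not the authors' GitHub code; the model choice, documents, and coding labels below are illustrative assumptions), an open-source, locally deployable NLI model can annotate texts via the Hugging Face transformers library:

    # Minimal sketch, not the authors' implementation: zero-shot text annotation
    # with an open-source, locally runnable NLI model (Hugging Face transformers).
    from transformers import pipeline

    # facebook/bart-large-mnli is one widely used open-source NLI model;
    # the paper's actual model choice may differ.
    classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

    # Hypothetical documents and coding scheme, for illustration only.
    texts = [
        "The government should raise the minimum wage.",
        "Border controls must be tightened immediately.",
    ]
    labels = ["economic policy", "immigration", "environment"]

    for text in texts:
        result = classifier(text, candidate_labels=labels)
        # Labels come back sorted by score; the top label is the annotation.
        print(f"{result['labels'][0]:>16}  ({result['scores'][0]:.2f})  {text}")

Because the model runs locally and its version can be pinned, every annotation is reproducible without calls to a closed-source API, which is the open-science advantage the abstract emphasizes.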

Suggested Citation

  • Cova, Joshua & Schmitz, Luuk, 2024. "A primer for the use of classifier and generative large language models in social science research," OSF Preprints r3qng_v1, Center for Open Science.
  • Handle: RePEc:osf:osfxxx:r3qng_v1
    DOI: 10.31219/osf.io/r3qng_v1

    Download full text from publisher

    File URL: https://osf.io/download/6764b5f734e328c181af0fed/
    Download Restriction: no

    File URL: https://libkey.io/10.31219/osf.io/r3qng_v1?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a version you can access through your library subscription

    References listed on IDEAS

    1. Benoit, Kenneth & Conway, Drew & Lauderdale, Benjamin E. & Laver, Michael & Mikhaylov, Slava, 2016. "Crowd-sourced Text Analysis: Reproducible and Agile Production of Political Data," American Political Science Review, Cambridge University Press, vol. 110(2), pages 278-295, May.
    2. Laurer, Moritz & van Atteveldt, Wouter & Casas, Andreu & Welbers, Kasper, 2024. "Less Annotating, More Classifying: Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT-NLI," Political Analysis, Cambridge University Press, vol. 32(1), pages 84-100, January.
    3. Grimmer, Justin & Stewart, Brandon M., 2013. "Text as Data: The Promise and Pitfalls of Automatic Content Analysis Methods for Political Texts," Political Analysis, Cambridge University Press, vol. 21(3), pages 267-297, July.
    4. Zobel, Malisa & Lehmann, Pola, 2018. "Positions and saliency of immigration in party manifestos: A novel dataset using crowd coding," EconStor Open Access Articles and Book Chapters, ZBW - Leibniz Information Centre for Economics, vol. 57(4), pages 1056-1083.
    5. Argyle, Lisa P. & Busby, Ethan C. & Fulda, Nancy & Gubler, Joshua R. & Rytting, Christopher & Wingate, David, 2023. "Out of One, Many: Using Language Models to Simulate Human Samples," Political Analysis, Cambridge University Press, vol. 31(3), pages 337-351, July.
    6. Horton, John J., 2023. "Large Language Models as Simulated Economic Agents: What Can We Learn from Homo Silicus?," NBER Working Papers 31122, National Bureau of Economic Research, Inc. Also available as Papers 2301.07543, arXiv.org.
    7. Korinek, Anton, 2023. "Generative AI for Economic Research: Use Cases and Implications for Economists," Journal of Economic Literature, American Economic Association, vol. 61(4), pages 1281-1317, December.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Cova, Joshua & Schmitz, Luuk, 2024. "A primer for the use of classifier and generative large language models in social science research," OSF Preprints r3qng, Center for Open Science.
    2. Navid Ghaffarzadegan & Aritra Majumdar & Ross Williams & Niyousha Hosseinichimeh, 2024. "Generative agent‐based modeling: an introduction and tutorial," System Dynamics Review, System Dynamics Society, vol. 40(1), January.
    3. Samuel Chang & Andrew Kennedy & Aaron Leonard & John A. List, 2024. "12 Best Practices for Leveraging Generative AI in Experimental Research," NBER Working Papers 33025, National Bureau of Economic Research, Inc.
    4. Shumiao Ouyang & Hayong Yun & Xingjian Zheng, 2024. "How Ethical Should AI Be? How AI Alignment Shapes the Risk Preferences of LLMs," Papers 2406.01168, arXiv.org, revised Aug 2024.
    5. Niyousha Hosseinichimeh & Aritra Majumdar & Ross Williams & Navid Ghaffarzadegan, 2024. "From text to map: a system dynamics bot for constructing causal loop diagrams," System Dynamics Review, System Dynamics Society, vol. 40(3), July.
    6. Rosa-García, Alfonso, 2024. "Student Reactions to AI-Replicant Professor in an Econ101 Teaching Video," MPRA Paper 120135, University Library of Munich, Germany.
    7. Kirshner, Samuel N., 2024. "GPT and CLT: The impact of ChatGPT's level of abstraction on consumer recommendations," Journal of Retailing and Consumer Services, Elsevier, vol. 76(C).
    8. Nir Chemaya & Daniel Martin, 2023. "Perceptions and Detection of AI Use in Manuscript Preparation for Academic Journals," Papers 2311.14720, arXiv.org, revised Jan 2024.
    9. Lijia Ma & Xingchen Xu & Yong Tan, 2024. "Crafting Knowledge: Exploring the Creative Mechanisms of Chat-Based Search Engines," Papers 2402.19421, arXiv.org.
    10. Ali Goli & Amandeep Singh, 2023. "Exploring the Influence of Language on Time-Reward Perceptions in Large Language Models: A Study Using GPT-3.5," Papers 2305.02531, arXiv.org, revised Jun 2023.
    11. Evangelos Katsamakas, 2024. "Business models for the simulation hypothesis," Papers 2404.08991, arXiv.org.
    12. Yuan Gao & Dokyun Lee & Gordon Burtch & Sina Fazelpour, 2024. "Take Caution in Using LLMs as Human Surrogates: Scylla Ex Machina," Papers 2410.19599, arXiv.org, revised Jan 2025.
    13. Christoph Engel & Max R. P. Grossmann & Axel Ockenfels, 2023. "Integrating machine behavior into human subject experiments: A user-friendly toolkit and illustrations," Discussion Paper Series of the Max Planck Institute for Research on Collective Goods 2024_01, Max Planck Institute for Research on Collective Goods.
    14. Yiting Chen & Tracy Xiao Liu & You Shan & Songfa Zhong, 2023. "The emergence of economic rationality of GPT," Proceedings of the National Academy of Sciences, vol. 120(51), article e2316205120, December.
    15. Jiafu An & Difang Huang & Chen Lin & Mingzhu Tai, 2024. "Measuring Gender and Racial Biases in Large Language Models," Papers 2403.15281, arXiv.org.
    16. Fulin Guo, 2023. "GPT in Game Theory Experiments," Papers 2305.05516, arXiv.org, revised Dec 2023.
    17. Fabio Motoki & Valdemar Pinho Neto & Victor Rodrigues, 2024. "More human than human: measuring ChatGPT political bias," Public Choice, Springer, vol. 198(1), pages 3-23, January.
    18. Siting Estee Lu, 2024. "Strategic Interactions between Large Language Models-based Agents in Beauty Contests," Papers 2404.08492, arXiv.org, revised Oct 2024.
    19. Yuqi Nie & Yaxuan Kong & Xiaowen Dong & John M. Mulvey & H. Vincent Poor & Qingsong Wen & Stefan Zohren, 2024. "A Survey of Large Language Models for Financial Applications: Progress, Prospects and Challenges," Papers 2406.11903, arXiv.org.
    20. Ayato Kitadai & Sinndy Dayana Rico Lugo & Yudai Tsurusaki & Yusuke Fukasawa & Nariaki Nishino, 2024. "Can AI with High Reasoning Ability Replicate Human-like Decision Making in Economic Experiments?," Papers 2406.11426, arXiv.org.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:osf:osfxxx:r3qng_v1. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact OSF. General contact details of provider: https://osf.io/preprints/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.