Printed from https://ideas.repec.org/p/osf/osfxxx/udz28.html

Fine-Tuning Large Language Models to Simulate German Voting Behaviour (Working Paper)

Authors

  • Holtdirk, Tobias
  • Assenmacher, Dennis
  • Bleier, Arnim
  • Wagner, Claudia

Abstract

Surveys are a cornerstone of empirical social science research, providing invaluable insights into people's opinions, beliefs, behaviours, and characteristics. However, issues such as refusal to participate, skipped questions, sampling bias, and attrition significantly impact the quality and reliability of survey data. Recently, researchers have started investigating the potential of Large Language Models (LLMs) to role-play a pre-defined set of "characters" and simulate their survey responses at little or no additional cost in training data. While previous research on forecasting, imputing, and simulating survey answers with LLMs has focused on zero-shot and few-shot approaches, this study investigates the viability of fine-tuning LLMs to simulate the responses of survey participants. We fine-tune LLMs on subsets of data from the German Longitudinal Election Study (GLES) and evaluate their predictive performance on the vote choice of a random set of held-out participants, comparing them against various baseline methods. Our findings show that small, fine-tuned open-source LLMs can outperform zero-shot predictions from larger LLMs. They match the performance of established tabular-data classifiers, are more sample-efficient, and outperform them in cases with systematic non-response. This study contributes to the growing body of research on LLMs for simulating survey data by demonstrating the effectiveness of fine-tuning approaches.
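The fine-tuning setup described in the abstract requires serialising each tabular survey respondent into a text prompt whose completion is the target variable. A minimal sketch of that preprocessing step follows; the field names, prompt wording, and example values are hypothetical, since the paper's actual GLES feature set and prompt template are not reproduced on this page.

```python
def respondent_to_example(row: dict) -> dict:
    """Serialise one survey respondent into a fine-tuning example.

    The prompt describes the respondent in natural language; the
    completion is the held-out target variable ("vote choice").
    All field names here are illustrative, not the GLES codebook's.
    """
    prompt = (
        f"A {row['age']}-year-old {row['gender']} from {row['state']} "
        f"with {row['education']} education, who places themselves at "
        f"{row['left_right']} on a 0-10 left-right scale, is asked: "
        f"'Which party would you vote for?' Answer:"
    )
    return {"prompt": prompt, "completion": f" {row['vote_choice']}"}


# One invented respondent, serialised for supervised fine-tuning.
example = respondent_to_example({
    "age": 42,
    "gender": "woman",
    "state": "Bavaria",
    "education": "vocational",
    "left_right": 4,
    "vote_choice": "SPD",
})
print(example["completion"])  # prints " SPD" (leading space marks the completion boundary)
```

Pairs of this shape can then be fed to any standard supervised fine-tuning pipeline for an open-source LLM; simulating a non-respondent amounts to generating the completion for a prompt whose answer was withheld.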

Suggested Citation

  • Holtdirk, Tobias & Assenmacher, Dennis & Bleier, Arnim & Wagner, Claudia, 2024. "Fine-Tuning Large Language Models to Simulate German Voting Behaviour (Working Paper)," OSF Preprints udz28, Center for Open Science.
  • Handle: RePEc:osf:osfxxx:udz28
    DOI: 10.31219/osf.io/udz28

    Download full text from publisher

    File URL: https://osf.io/download/6702c880f240037f8fa2009a/
    Download Restriction: no

    File URL: https://libkey.io/10.31219/osf.io/udz28?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item
    ---><---

    References listed on IDEAS

    1. John J. Horton, 2023. "Large Language Models as Simulated Economic Agents: What Can We Learn from Homo Silicus?," NBER Working Papers 31122, National Bureau of Economic Research, Inc.
    2. Bergmann, Knut & Diermeier, Matthias, 2017. "Die AfD: Eine unterschätzte Partei. Soziale Erwünschtheit als Erklärung für fehlerhafte Prognosen," IW-Reports 7/2017, Institut der deutschen Wirtschaft (IW) / German Economic Institute.
    3. Murray Shanahan & Kyle McDonell & Laria Reynolds, 2023. "Role play with large language models," Nature, Nature, vol. 623(7987), pages 493-498, November.
    4. Schmitt-Beck, Rüdiger & Roßteutscher, Sigrid & Schoen, Harald & Weßels, Bernhard & Wolf, Christof, 2022. "The Changing German Voter," EconStor Open Access Articles and Book Chapters, ZBW - Leibniz Information Centre for Economics, pages 313-336.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Chen Gao & Xiaochong Lan & Nian Li & Yuan Yuan & Jingtao Ding & Zhilun Zhou & Fengli Xu & Yong Li, 2024. "Large language models empowered agent-based modeling and simulation: a survey and perspectives," Palgrave Communications, Palgrave Macmillan, vol. 11(1), pages 1-24, December.
    2. Kirshner, Samuel N., 2024. "GPT and CLT: The impact of ChatGPT's level of abstraction on consumer recommendations," Journal of Retailing and Consumer Services, Elsevier, vol. 76(C).
    3. Nir Chemaya & Daniel Martin, 2023. "Perceptions and Detection of AI Use in Manuscript Preparation for Academic Journals," Papers 2311.14720, arXiv.org, revised Jan 2024.
    4. Lijia Ma & Xingchen Xu & Yong Tan, 2024. "Crafting Knowledge: Exploring the Creative Mechanisms of Chat-Based Search Engines," Papers 2402.19421, arXiv.org.
    5. Ali Goli & Amandeep Singh, 2023. "Exploring the Influence of Language on Time-Reward Perceptions in Large Language Models: A Study Using GPT-3.5," Papers 2305.02531, arXiv.org, revised Jun 2023.
    6. Evangelos Katsamakas, 2024. "Business models for the simulation hypothesis," Papers 2404.08991, arXiv.org.
    7. Christoph Engel & Max R. P. Grossmann & Axel Ockenfels, 2023. "Integrating machine behavior into human subject experiments: A user-friendly toolkit and illustrations," Discussion Paper Series of the Max Planck Institute for Research on Collective Goods 2024_01, Max Planck Institute for Research on Collective Goods.
8. Yiting Chen & Tracy Xiao Liu & You Shan & Songfa Zhong, 2023. "The emergence of economic rationality of GPT," Proceedings of the National Academy of Sciences, vol. 120(51), article e2316205120, December.
    9. Jiafu An & Difang Huang & Chen Lin & Mingzhu Tai, 2024. "Measuring Gender and Racial Biases in Large Language Models," Papers 2403.15281, arXiv.org.
    10. Fulin Guo, 2023. "GPT in Game Theory Experiments," Papers 2305.05516, arXiv.org, revised Dec 2023.
    11. Fabio Motoki & Valdemar Pinho Neto & Victor Rodrigues, 2024. "More human than human: measuring ChatGPT political bias," Public Choice, Springer, vol. 198(1), pages 3-23, January.
    12. Siting Estee Lu, 2024. "Strategic Interactions between Large Language Models-based Agents in Beauty Contests," Papers 2404.08492, arXiv.org, revised Oct 2024.
    13. Yuqi Nie & Yaxuan Kong & Xiaowen Dong & John M. Mulvey & H. Vincent Poor & Qingsong Wen & Stefan Zohren, 2024. "A Survey of Large Language Models for Financial Applications: Progress, Prospects and Challenges," Papers 2406.11903, arXiv.org.
    14. Ayato Kitadai & Sinndy Dayana Rico Lugo & Yudai Tsurusaki & Yusuke Fukasawa & Nariaki Nishino, 2024. "Can AI with High Reasoning Ability Replicate Human-like Decision Making in Economic Experiments?," Papers 2406.11426, arXiv.org.
    15. Bauer, Kevin & Liebich, Lena & Hinz, Oliver & Kosfeld, Michael, 2023. "Decoding GPT's hidden "rationality" of cooperation," SAFE Working Paper Series 401, Leibniz Institute for Financial Research SAFE.
    16. Van Pham & Scott Cunningham, 2024. "Can Base ChatGPT be Used for Forecasting without Additional Optimization?," Papers 2404.07396, arXiv.org, revised Jul 2024.
    17. Philip Brookins & Jason DeBacker, 2024. "Playing games with GPT: What can we learn about a large language model from canonical strategic games?," Economics Bulletin, AccessEcon, vol. 44(1), pages 25-37.
    18. Kevin Leyton-Brown & Paul Milgrom & Neil Newman & Ilya Segal, 2023. "Artificial Intelligence and Market Design: Lessons Learned from Radio Spectrum Reallocation," NBER Chapters, in: New Directions in Market Design, National Bureau of Economic Research, Inc.
    19. Zengqing Wu & Run Peng & Xu Han & Shuyuan Zheng & Yixin Zhang & Chuan Xiao, 2023. "Smart Agent-Based Modeling: On the Use of Large Language Models in Computer Simulations," Papers 2311.06330, arXiv.org, revised Dec 2023.
    20. Joshua C. Yang & Damian Dailisan & Marcin Korecki & Carina I. Hausladen & Dirk Helbing, 2024. "LLM Voting: Human Choices and AI Collective Decision Making," Papers 2402.01766, arXiv.org, revised Aug 2024.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:osf:osfxxx:udz28. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: OSF (email available below). General contact details of provider: https://osf.io/preprints/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.