Printed from https://ideas.repec.org/p/smo/raiswp/0451.html

Political Bias in Large Language Models: A Comparative Analysis of ChatGPT-4, Perplexity, Google Gemini, and Claude

Author

  • Tavishi Choudhary

    (Greenwich High, Greenwich, Connecticut, US)

Abstract

Large language models have rapidly gained widespread adoption, sparking discussion of their societal and political impact, particularly their potential for political bias and its far-reaching consequences for society and citizens. This study explores political bias in large language models through a comparative analysis of four popular AI models: ChatGPT-4, Perplexity, Google Gemini, and Claude. The research systematically evaluates their responses to politically charged prompts and to questions from the Pew Research Center’s Political Typology Quiz, the Political Compass Quiz, and the ISideWith Quiz. The findings reveal that ChatGPT-4 and Claude exhibit a liberal bias, Perplexity leans more conservative, and Google Gemini adopts more centrist stances, differences attributable to their respective training datasets. The presence of such biases underscores the critical need for transparency in AI development, along with diverse training datasets, regular audits, and user education to mitigate them. The most significant question surrounding political bias in AI concerns its consequences, particularly its influence on public discourse, policy-making, and democratic processes. The results highlight the ethical implications of AI model development and the need for transparency to build trust and integrity in AI systems. Future research directions are outlined to further explore and address the complex issue of AI bias.

Suggested Citation

  • Tavishi Choudhary, 2024. "Political Bias in Large Language Models: A Comparative Analysis of ChatGPT-4, Perplexity, Google Gemini, and Claude," RAIS Conference Proceedings 2022-2024 0451, Research Association for Interdisciplinary Studies.
  • Handle: RePEc:smo:raiswp:0451

    Download full text from publisher

    File URL: https://rais.education/wp-content/uploads/2024/10/0451.pdf
    File Function: Full text
    Download Restriction: no


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Nils Köbis & Jean-François Bonnefon & Iyad Rahwan, 2021. "Bad machines corrupt good morals," Nature Human Behaviour, Nature, vol. 5(6), pages 679-685, June.
    2. Khalid Alshehhi & Ali Cheaitou & Hamad Rashid, 2024. "Procurement of Artificial Intelligence Systems in UAE Public Sectors: An Interpretive Structural Modeling of Critical Success Factors," Sustainability, MDPI, vol. 16(17), pages 1-20, September.
    3. Salih Tutun & Marina E. Johnson & Abdulaziz Ahmed & Abdullah Albizri & Sedat Irgil & Ilker Yesilkaya & Esma Nur Ucar & Tanalp Sengun & Antoine Harfouche, 2023. "An AI-based Decision Support System for Predicting Mental Health Disorders," Information Systems Frontiers, Springer, vol. 25(3), pages 1261-1276, June.
    4. Veljko Dubljevic & George List & Jovan Milojevich & Nirav Ajmeri & William A Bauer & Munindar P Singh & Eleni Bardaka & Thomas A Birkland & Charles H W Edwards & Roger C Mayer & Ioan Muntean & Thomas , 2021. "Toward a rational and ethical sociotechnical system of autonomous vehicles: A novel application of multi-criteria decision analysis," PLOS ONE, Public Library of Science, vol. 16(8), pages 1-17, August.
    5. Fábio Duarte & Ricardo Álvarez, 2019. "The data politics of the urban age," Palgrave Communications, Palgrave Macmillan, vol. 5(1), pages 1-7, December.
    6. Emilio M. Santandreu & Joaquín López Pascual & Salvador Cruz Rambaud, 2020. "Determinants of Repayment among Male and Female Microcredit Clients in the USA. An Approach Based on Managers’ Perceptions," Sustainability, MDPI, vol. 12(5), pages 1-17, February.
    7. Noah Castelo & Adrian F Ward, 2021. "Conservatism predicts aversion to consequential Artificial Intelligence," PLOS ONE, Public Library of Science, vol. 16(12), pages 1-19, December.
    8. Jon Truby, 2020. "Governing Artificial Intelligence to benefit the UN Sustainable Development Goals," Sustainable Development, John Wiley & Sons, Ltd., vol. 28(4), pages 946-959, July.
    9. Hemant Jain & Balaji Padmanabhan & Paul A. Pavlou & T. S. Raghu, 2021. "Editorial for the Special Section on Humans, Algorithms, and Augmented Intelligence: The Future of Work, Organizations, and Society," Information Systems Research, INFORMS, vol. 32(3), pages 675-687, September.
    10. Buhmann, Alexander & Fieseler, Christian, 2021. "Towards a deliberative framework for responsible innovation in artificial intelligence," Technology in Society, Elsevier, vol. 64(C).
    11. Naudé, Wim & Dimitri, Nicola, 2021. "Public Procurement and Innovation for Human-Centered Artificial Intelligence," IZA Discussion Papers 14021, Institute of Labor Economics (IZA).
    12. Hashmi, Nada & Bal, Anjali S., 2024. "Generative AI in higher education and beyond," Business Horizons, Elsevier, vol. 67(5), pages 607-614.
    13. Latham, Alan & Nattrass, Michael, 2019. "Autonomous vehicles, car-dominated environments, and cycling: Using an ethnography of infrastructure to reflect on the prospects of a new transportation technology," Journal of Transport Geography, Elsevier, vol. 81(C).
    14. Jean-Marie John-Mathews & Dominique Cardon & Christine Balagué, 2022. "From Reality to World. A Critical Perspective on AI Fairness," Journal of Business Ethics, Springer, vol. 178(4), pages 945-959, July.
    15. Michael Haenlein & Ming-Hui Huang & Andreas Kaplan, 2022. "Guest Editorial: Business Ethics in the Era of Artificial Intelligence," Journal of Business Ethics, Springer, vol. 178(4), pages 867-869, July.
    16. Mateos-Garcia, Juan, 2017. "To Err is Algorithm: Algorithmic fallibility and economic organisation," SocArXiv xuvf9, Center for Open Science.
    17. Roman Lukyanenko & Wolfgang Maass & Veda C. Storey, 2022. "Trust in artificial intelligence: From a Foundational Trust Framework to emerging research opportunities," Electronic Markets, Springer;IIM University of St. Gallen, vol. 32(4), pages 1993-2020, December.
    18. Steve J. Bickley & Alison Macintyre & Benno Torgler, 2021. "Safety in Smart, Livable Cities: Acknowledging the Human Factor," CREMA Working Paper Series 2021-17, Center for Research in Economics, Management and the Arts (CREMA).
    19. Peter Seele & Mario D. Schultz, 2022. "From Greenwashing to Machinewashing: A Model and Future Directions Derived from Reasoning by Analogy," Journal of Business Ethics, Springer, vol. 178(4), pages 1063-1089, July.

    More about this item

    Keywords

    Large language models (LLM); Generative AI (GenAI); AI Governance and Policy; Ethical AI Systems;



    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.