
A Manager and an AI Walk into a Bar: Does ChatGPT Make Biased Decisions Like We Do?

Author

Listed:
  • Yang Chen

    (Ivey Business School, Western University, London, Ontario N6G 0N1, Canada)

  • Samuel N. Kirshner

    (University of New South Wales Business School, University of New South Wales, Sydney, New South Wales 2052, Australia)

  • Anton Ovchinnikov

    (Smith School of Business, Queen’s University, Kingston, Ontario K7L 3N6, Canada; and INSEAD, 77300 Fontainebleau, France)

  • Meena Andiappan

    (DeGroote School of Business, McMaster University, Hamilton, Ontario L8S 4M4, Canada; and Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario M5T 3M6, Canada)

  • Tracy Jenkin

    (Smith School of Business, Queen’s University, Kingston, Ontario K7L 3N6, Canada; and Vector Institute, Toronto, Ontario M5G 0C6, Canada)

Abstract

Problem definition: Large language models (LLMs) are increasingly being leveraged in business and consumer decision-making processes. Because LLMs learn from human data and feedback, both of which can be biased, determining whether LLMs exhibit human-like behavioral decision biases (e.g., base-rate neglect, risk aversion, and confirmation bias) is crucial before incorporating LLMs into decision-making contexts and workflows. To this end, we examine 18 common human biases that are important in operations management (OM) using the dominant LLM, ChatGPT.

Methodology/results: We perform experiments in which GPT-3.5 and GPT-4 act as participants to test these biases, using vignettes adapted from the literature ("standard context") and variants reframed in inventory and general OM contexts. In almost half of the experiments, Generative Pre-trained Transformer (GPT) mirrors human biases; in the remaining experiments it diverges from prototypical human responses. We also observe that the GPT models are notably consistent between the standard and OM-specific experiments, as well as across temporal versions of the GPT-3.5 model. Our comparative analysis of GPT-3.5 and GPT-4 reveals a dual-edged progression in GPT's decision making: GPT-4 improves in decision-making accuracy for problems with well-defined mathematical solutions while simultaneously displaying increased behavioral biases for preference-based problems.

Managerial implications: First, our results highlight that managers will obtain the greatest benefits from deploying GPT in workflows that leverage established formulas. Second, GPT's high level of response consistency across the standard, inventory, and non-inventory operational contexts provides optimism that LLMs can offer reliable support even when details of the decision and problem contexts change. Third, although selecting between models such as GPT-3.5 and GPT-4 represents a trade-off between cost and performance, our results suggest that managers should invest in higher-performing models, particularly for problems with objective solutions.
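
To make the experimental setup concrete, the sketch below shows one way a vignette-based experiment of this kind could be replicated against two GPT models. It is a minimal illustration assuming the OpenAI Python SDK (v1+) and an API key in the environment; the newsvendor-style prompt, model names, and trial count are illustrative placeholders, not the paper's actual materials or protocol.

```python
# Minimal sketch of a vignette-based bias experiment (illustrative only).
# Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY set in the
# environment; the vignette text and model names below are placeholders,
# not the paper's actual materials.
from openai import OpenAI

client = OpenAI()

# An illustrative newsvendor-style vignette framed as a survey question.
VIGNETTE = (
    "You manage inventory for a seasonal product that costs $3 per unit and "
    "sells for $12. Unsold units are worthless. Daily demand is uniformly "
    "distributed between 100 and 300 units. How many units do you order? "
    "Reply with a single number."
)

def ask(model: str, prompt: str, n_trials: int = 10) -> list[str]:
    """Collect repeated responses so answer variability can be inspected."""
    replies = []
    for _ in range(n_trials):
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=1.0,  # keep sampling noise as a human-subject analogue
        )
        replies.append(resp.choices[0].message.content.strip())
    return replies

if __name__ == "__main__":
    for model in ("gpt-3.5-turbo", "gpt-4"):
        print(model, ask(model, VIGNETTE))
```

Repeating each prompt several times at a nonzero temperature allows response variability to be inspected, and the same loop can be rerun with standard-context and OM-reframed variants of each vignette to compare consistency across framings.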

Suggested Citation

  • Yang Chen & Samuel N. Kirshner & Anton Ovchinnikov & Meena Andiappan & Tracy Jenkin, 2025. "A Manager and an AI Walk into a Bar: Does ChatGPT Make Biased Decisions Like We Do?," Manufacturing & Service Operations Management, INFORMS, vol. 27(2), pages 354-368, March.
  • Handle: RePEc:inm:ormsom:v:27:y:2025:i:2:p:354-368
    DOI: 10.1287/msom.2023.0279

    Download full text from publisher

    File URL: http://dx.doi.org/10.1287/msom.2023.0279
    Download Restriction: no

    File URL: https://libkey.io/10.1287/msom.2023.0279?utm_source=ideas
    LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.
