
Emergent analogical reasoning in large language models

Authors

Listed:
  • Taylor Webb

    (University of California)

  • Keith J. Holyoak

    (University of California)

  • Hongjing Lu

(University of California)

Abstract

The recent advent of large language models has reinvigorated debate over whether human cognitive capacities might emerge in such generic models given sufficient training data. Of particular interest is the ability of these models to reason about novel problems zero-shot, without any direct training. In human cognition, this capacity is closely tied to an ability to reason by analogy. Here we performed a direct comparison between human reasoners and a large language model (the text-davinci-003 variant of Generative Pre-trained Transformer (GPT)-3) on a range of analogical tasks, including a non-visual matrix reasoning task based on the rule structure of Raven’s Standard Progressive Matrices. We found that GPT-3 displayed a surprisingly strong capacity for abstract pattern induction, matching or even surpassing human capabilities in most settings; preliminary tests of GPT-4 indicated even better performance. Our results indicate that large language models such as GPT-3 have acquired an emergent ability to find zero-shot solutions to a broad range of analogy problems.
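
The study's central manipulation is zero-shot prompting: the model is given a single unsolved problem, with no worked examples, and must induce the governing rule on its own. As a concrete illustration, the sketch below poses a letter-string analogy of the general kind the paper describes to the text-davinci-003 model. This is a minimal sketch assuming the legacy OpenAI Python SDK (openai<1.0); the prompt wording and evaluation details are illustrative assumptions rather than the authors' exact materials, and text-davinci-003 has since been retired.

```python
# Minimal sketch of zero-shot analogy prompting, assuming the legacy
# OpenAI Python SDK (openai<1.0) and the now-retired text-davinci-003
# model. Prompt format is illustrative, not the authors' exact materials.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# A letter-string analogy: the model must infer the transformation from
# the source pair (increment the last letter) and apply it to the target.
# No solved examples are provided, so the problem is solved zero-shot.
prompt = (
    "Let's try to complete the pattern:\n\n"
    "[a b c d] -> [a b c e]\n"
    "[i j k l] -> ["
)

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=16,
    temperature=0,  # greedy decoding for a deterministic, gradable answer
)
print(response["choices"][0]["text"])  # expected continuation: "i j k m]"
```

Setting temperature to 0 makes the completion deterministic, so a single response per problem can be scored against the rule-defined answer; a correct continuation here would extend the pattern to [i j k m].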

Suggested Citation

  • Taylor Webb & Keith J. Holyoak & Hongjing Lu, 2023. "Emergent analogical reasoning in large language models," Nature Human Behaviour, Nature, vol. 7(9), pages 1526-1541, September.
  • Handle: RePEc:nat:nathum:v:7:y:2023:i:9:d:10.1038_s41562-023-01659-w
    DOI: 10.1038/s41562-023-01659-w

    Download full text from publisher

    File URL: https://www.nature.com/articles/s41562-023-01659-w
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1038/s41562-023-01659-w?utm_source=ideas
LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy of this item that you can access through your library subscription

As access to this document is restricted, you may want to search for a different version of it.

    Citations

Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Yan Leng & Yuan Yuan, 2023. "Do LLM Agents Exhibit Social Behavior?," Papers 2312.15198, arXiv.org, revised Oct 2024.
    2. Jian-Qiao Zhu & Haijiang Yan & Thomas L. Griffiths, 2024. "Language Models Trained to do Arithmetic Predict Human Risky and Intertemporal Choice," Papers 2405.19313, arXiv.org.
    3. Siting Estee Lu, 2024. "Strategic Interactions between Large Language Models-based Agents in Beauty Contests," Papers 2404.08492, arXiv.org, revised Oct 2024.
    4. Jeongbin Kim & Matthew Kovach & Kyu-Min Lee & Euncheol Shin & Hector Tzavellas, 2024. "Learning to be Homo Economicus: Can an LLM Learn Preferences from Choice," Papers 2401.07345, arXiv.org.
    5. Ahmad A. Toumeh, 2024. "Assessing the potential integration of large language models in accounting practices: evidence from an emerging economy," Future Business Journal, Springer, vol. 10(1), pages 1-15, December.
6. James W. A. Strachan & Dalila Albergo & Giulia Borghini & Oriana Pansardi & Eugenio Scaliti & Saurabh Gupta & Krati Saxena & Alessandro Rufo & Stefano Panzeri & Guido Manzi & Michael S. A. Graziano et al., 2024. "Testing theory of mind in large language models and humans," Nature Human Behaviour, Nature, vol. 8(7), pages 1285-1295, July.

    More about this item


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:nathum:v:7:y:2023:i:9:d:10.1038_s41562-023-01659-w. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register. Registration links your profile to this item and allows you to accept potential citations to this item that we are uncertain about.

We have no bibliographic references for this item. You can help add them by using this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact Sonal Shukla or Springer Nature Abstracting and Indexing. General contact details of provider: http://www.nature.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.
