
Larger and more instructable language models become less reliable

Authors

Listed:
  • Lexin Zhou

    (Universitat Politècnica de València
    University of Cambridge)

  • Wout Schellaert

    (Universitat Politècnica de València
    University of Cambridge)

  • Fernando Martínez-Plumed

    (Universitat Politècnica de València
    ValGRAI)

  • Yael Moros-Daval

    (Universitat Politècnica de València)

  • Cèsar Ferri

    (Universitat Politècnica de València
    ValGRAI)

  • José Hernández-Orallo

    (Universitat Politècnica de València
    University of Cambridge
    ValGRAI)

Abstract

The prevailing methods to make large language models more powerful and amenable have been based on continuous scaling up (that is, increasing their size, data volume and computational resources [1]) and bespoke shaping up (including post-filtering [2,3], fine-tuning or use of human feedback [4,5]). However, larger and more instructable large language models may have become less reliable. By studying the relationship between difficulty concordance, task avoidance and prompting stability of several language model families, here we show that easy instances for human participants are also easy for the models, but scaled-up, shaped-up models do not secure areas of low difficulty in which either the model does not err or human supervision can spot the errors. We also find that early models often avoid user questions but scaled-up, shaped-up models tend to give an apparently sensible yet wrong answer much more often, including errors on difficult questions that human supervisors frequently overlook. Moreover, we observe that stability to different natural phrasings of the same question is improved by scaling-up and shaping-up interventions, but pockets of variability persist across difficulty levels. These findings highlight the need for a fundamental shift in the design and development of general-purpose artificial intelligence, particularly in high-stakes areas for which a predictable distribution of errors is paramount.
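
The three measures the abstract names lend themselves to simple operationalizations. The following is a minimal illustrative sketch, not the authors' evaluation code: the record fields, outcome labels and difficulty scale are assumptions, and real data would be binned by difficulty level rather than split in half.

# Minimal illustrative sketch (assumed data layout, not the paper's code).
# Each record holds: a human-calibrated difficulty score, the model's outcome
# on the item ("correct" | "incorrect" | "avoidant"), and outcomes on several
# natural rephrasings of the same question.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Item:
    difficulty: float                      # assumed scale, e.g. 0 (easy) to 100 (hard)
    outcome: str                           # "correct" | "incorrect" | "avoidant"
    paraphrase_outcomes: list[str] = field(default_factory=list)

def difficulty_concordance(items: list[Item]) -> tuple[float, float]:
    """Error rates on the easier vs. harder half of the items; a concordant,
    reliable model errs mostly on the harder half, leaving a trustworthy
    low-difficulty region."""
    ranked = sorted(items, key=lambda it: it.difficulty)
    half = len(ranked) // 2
    error_rate = lambda xs: mean(it.outcome != "correct" for it in xs)
    return error_rate(ranked[:half]), error_rate(ranked[half:])

def avoidance_share(items: list[Item]) -> float:
    """Among non-correct responses, the fraction that are avoidant rather than
    confidently wrong (the abstract reports this share shrinking as models are
    scaled up and shaped up)."""
    non_correct = [it for it in items if it.outcome != "correct"]
    return mean(it.outcome == "avoidant" for it in non_correct) if non_correct else 0.0

def prompting_stability(items: list[Item]) -> float:
    """Fraction of items whose outcome is identical across all rephrasings."""
    varied = [it for it in items if it.paraphrase_outcomes]
    return mean(len(set(it.paraphrase_outcomes)) == 1 for it in varied)

# Toy usage with made-up records:
data = [
    Item(10, "correct", ["correct", "correct", "correct"]),
    Item(55, "avoidant", ["avoidant", "incorrect", "avoidant"]),
    Item(90, "incorrect", ["incorrect", "incorrect", "correct"]),
]
print(difficulty_concordance(data))   # (0.0, 1.0): errors concentrate on hard items
print(avoidance_share(data))          # 0.5
print(prompting_stability(data))      # ~0.33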

Suggested Citation

  • Lexin Zhou & Wout Schellaert & Fernando Martínez-Plumed & Yael Moros-Daval & Cèsar Ferri & José Hernández-Orallo, 2024. "Larger and more instructable language models become less reliable," Nature, Nature, vol. 634(8032), pages 61-68, October.
  • Handle: RePEc:nat:nature:v:634:y:2024:i:8032:d:10.1038_s41586-024-07930-y
    DOI: 10.1038/s41586-024-07930-y

    Download full text from publisher

    File URL: https://www.nature.com/articles/s41586-024-07930-y
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1038/s41586-024-07930-y?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:nature:v:634:y:2024:i:8032:d:10.1038_s41586-024-07930-y. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.nature.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.