
Foundation models for generalist medical artificial intelligence

Authors

Listed:
  • Michael Moor

    (Stanford University)

  • Oishi Banerjee

    (Harvard University)

  • Zahra Shakeri Hossein Abad

    (University of Toronto)

  • Harlan M. Krumholz

    (Yale University School of Medicine, Center for Outcomes Research and Evaluation, Yale New Haven Hospital)

  • Jure Leskovec

    (Stanford University)

  • Eric J. Topol

    (Scripps Research Translational Institute)

  • Pranav Rajpurkar

    (Harvard University)

Abstract

The exceptionally rapid development of highly flexible, reusable artificial intelligence (AI) models is likely to usher in newfound capabilities in medicine. We propose a new paradigm for medical AI, which we refer to as generalist medical AI (GMAI). GMAI models will be capable of carrying out a diverse set of tasks using very little or no task-specific labelled data. Built through self-supervision on large, diverse datasets, GMAI will flexibly interpret different combinations of medical modalities, including data from imaging, electronic health records, laboratory results, genomics, graphs or medical text. Models will in turn produce expressive outputs such as free-text explanations, spoken recommendations or image annotations that demonstrate advanced medical reasoning abilities. Here we identify a set of high-impact potential applications for GMAI and lay out specific technical capabilities and training datasets necessary to enable them. We expect that GMAI-enabled applications will challenge current strategies for regulating and validating AI devices for medicine and will shift practices associated with the collection of large medical datasets.
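
To make the interface described in the abstract concrete, the sketch below shows a single model that accepts whatever combination of medical modalities is available (an image, clinical text, a lab panel) and emits logits over an output vocabulary, from which free text could be decoded. This is a hedged illustration, not the authors' architecture: the class name GMAISketch, every module choice, and all dimensions are assumptions, written here in PyTorch.

```python
# A minimal sketch (an assumption, not the paper's implementation) of a
# GMAI-style interface: per-modality encoders project heterogeneous inputs
# into a shared token space, a shared transformer fuses whichever modality
# tokens are present, and a language-model head produces free-text logits.
import torch
import torch.nn as nn

class GMAISketch(nn.Module):
    def __init__(self, d_model=256, vocab_size=32000):
        super().__init__()
        # Stand-in vision backbone for medical images (e.g. a chest X-ray).
        self.image_encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=7, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, d_model),
        )
        self.text_embed = nn.Embedding(vocab_size, d_model)  # EHR notes, reports
        self.labs_encoder = nn.Linear(16, d_model)           # fixed-size lab panel
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)        # free-text output

    def forward(self, image=None, text_ids=None, labs=None):
        # Each modality is optional: the model interprets whatever is supplied.
        tokens = []
        if image is not None:
            tokens.append(self.image_encoder(image).unsqueeze(1))
        if text_ids is not None:
            tokens.append(self.text_embed(text_ids))
        if labs is not None:
            tokens.append(self.labs_encoder(labs).unsqueeze(1))
        fused = self.fusion(torch.cat(tokens, dim=1))
        return self.lm_head(fused)  # logits over the output vocabulary

# Toy usage: an image plus a short note and a lab panel, no task-specific labels.
model = GMAISketch()
logits = model(
    image=torch.randn(1, 1, 224, 224),
    text_ids=torch.randint(0, 32000, (1, 12)),
    labs=torch.randn(1, 16),
)
print(logits.shape)  # torch.Size([1, 14, 32000])
```

The design point the abstract argues for is visible in the forward signature: every modality is optional, all inputs land in one shared representation, and a single fused model can serve many downstream tasks after self-supervised pretraining rather than per-task labelled training.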

Suggested Citation

  • Michael Moor & Oishi Banerjee & Zahra Shakeri Hossein Abad & Harlan M. Krumholz & Jure Leskovec & Eric J. Topol & Pranav Rajpurkar, 2023. "Foundation models for generalist medical artificial intelligence," Nature, Nature, vol. 616(7956), pages 259-265, April.
  • Handle: RePEc:nat:nature:v:616:y:2023:i:7956:d:10.1038_s41586-023-05881-4
    DOI: 10.1038/s41586-023-05881-4

    Download full text from publisher

    File URL: https://www.nature.com/articles/s41586-023-05881-4
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1038/s41586-023-05881-4?utm_source=ideas
    LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    As access to this document is restricted, you may want to search for a different version of it.

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Maksim Makarenko & Arturo Burguete-Lopez & Qizhou Wang & Silvio Giancola & Bernard Ghanem & Luca Passone & Andrea Fratalocchi, 2024. "Hardware-accelerated integrated optoelectronic platform towards real-time high-resolution hyperspectral video understanding," Nature Communications, Nature, vol. 15(1), pages 1-12, December.
    2. Soroosh Tayebi Arasteh & Tianyu Han & Mahshad Lotfinia & Christiane Kuhl & Jakob Nikolas Kather & Daniel Truhn & Sven Nebelung, 2024. "Large language models streamline automated machine learning for clinical studies," Nature Communications, Nature, vol. 15(1), pages 1-12, December.
    3. Pengcheng Qiu & Chaoyi Wu & Xiaoman Zhang & Weixiong Lin & Haicheng Wang & Ya Zhang & Yanfeng Wang & Weidi Xie, 2024. "Towards building multilingual language model for medicine," Nature Communications, Nature, vol. 15(1), pages 1-15, December.
    4. Weijian Huang & Cheng Li & Hong-Yu Zhou & Hao Yang & Jiarun Liu & Yong Liang & Hairong Zheng & Shaoting Zhang & Shanshan Wang, 2024. "Enhancing representation in radiography-reports foundation model: a granular alignment algorithm using masked contrastive learning," Nature Communications, Nature, vol. 15(1), pages 1-12, December.
    5. Junwei Cheng & Chaoran Huang & Jialong Zhang & Bo Wu & Wenkai Zhang & Xinyu Liu & Jiahui Zhang & Yiyi Tang & Hailong Zhou & Qiming Zhang & Min Gu & Jianji Dong & Xinliang Zhang, 2024. "Multimodal deep learning using on-chip diffractive optics with in situ training capability," Nature Communications, Nature, vol. 15(1), pages 1-10, December.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:nature:v:616:y:2023:i:7956:d:10.1038_s41586-023-05881-4. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.nature.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.