
Segment anything in medical images

Authors

Listed:
  • Jun Ma

    (University Health Network
    University of Toronto
    Vector Institute)

  • Yuting He

    (Western University)

  • Feifei Li

    (University Health Network)

  • Lin Han

    (New York University)

  • Chenyu You

    (Yale University)

  • Bo Wang

    (University Health Network
    University of Toronto
    Vector Institute)

Abstract

Medical image segmentation is a critical component of clinical practice, facilitating accurate diagnosis, treatment planning, and disease monitoring. However, existing methods, often tailored to specific modalities or disease types, lack generalizability across the diverse spectrum of medical image segmentation tasks. Here we present MedSAM, a foundation model designed to bridge this gap by enabling universal medical image segmentation. The model is developed on a large-scale medical image dataset of 1,570,263 image-mask pairs, covering 10 imaging modalities and over 30 cancer types. We conduct a comprehensive evaluation on 86 internal validation tasks and 60 external validation tasks, demonstrating better accuracy and robustness than modality-wise specialist models. By delivering accurate and efficient segmentation across a wide spectrum of tasks, MedSAM holds significant potential to expedite the evolution of diagnostic tools and the personalization of treatment plans.

Suggested Citation

  • Jun Ma & Yuting He & Feifei Li & Lin Han & Chenyu You & Bo Wang, 2024. "Segment anything in medical images," Nature Communications, Nature, vol. 15(1), pages 1-9, December.
  • Handle: RePEc:nat:natcom:v:15:y:2024:i:1:d:10.1038_s41467-024-44824-z
    DOI: 10.1038/s41467-024-44824-z

    Download full text from publisher

    File URL: https://www.nature.com/articles/s41467-024-44824-z
    File Function: Abstract
    Download Restriction: no

    File URL: https://libkey.io/10.1038/s41467-024-44824-z?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. Bryan He & Alan C. Kwan & Jae Hyung Cho & Neal Yuan & Charles Pollick & Takahiro Shiota & Joseph Ebinger & Natalie A. Bello & Janet Wei & Kiranbir Josan & Grant Duffy & Melvin Jujjavarapu & Robert Sie, 2023. "Blinded, randomized trial of sonographer versus AI cardiac function assessment," Nature, Nature, vol. 616(7957), pages 520-524, April.
    2. Michela Antonelli & Annika Reinke & Spyridon Bakas & Keyvan Farahani & Annette Kopp-Schneider & Bennett A. Landman & Geert Litjens & Bjoern Menze & Olaf Ronneberger & Ronald M. Summers & Bram van Ginneken, 2022. "The Medical Segmentation Decathlon," Nature Communications, Nature, vol. 13(1), pages 1-13, December.
    3. David Ouyang & Bryan He & Amirata Ghorbani & Neal Yuan & Joseph Ebinger & Curtis P. Langlotz & Paul A. Heidenreich & Robert A. Harrington & David H. Liang & Euan A. Ashley & James Y. Zou, 2020. "Video-based AI for beat-to-beat assessment of cardiac function," Nature, Nature, vol. 580(7802), pages 252-256, April.

    Citations

    Citations are extracted by the CitEc project; subscribe to its RSS feed for this item.

    Cited by:

    1. Erik Cuevas & Alberto Luque & Fernando Vega & Daniel Zaldívar & Jesús López, 2024. "Social influence dynamics for image segmentation: a novel pixel interaction approach," Journal of Computational Social Science, Springer, vol. 7(3), pages 2613-2642, December.
    2. Oded Rotem & Tamar Schwartz & Ron Maor & Yishay Tauber & Maya Tsarfati Shapiro & Marcos Meseguer & Daniella Gilboa & Daniel S. Seidman & Assaf Zaritsky, 2024. "Visual interpretability of image-based classification models by generative latent space disentanglement applied to in vitro fertilization," Nature Communications, Nature, vol. 15(1), pages 1-19, December.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Emmons, Karen M. & Mendez, Samuel & Lee, Rebekka M. & Erani, Diana & Mascioli, Lynette & Abreu, Marlene & Adams, Susan & Daly, James & Bierer, Barbara E., 2023. "Data sharing in the context of community-engaged research partnerships," Social Science & Medicine, Elsevier, vol. 325(C).
    2. Li, Shengxiao (Alex), 2023. "Revisiting the relationship between information and communication technologies and travel behavior: An investigation of older Americans," Transportation Research Part A: Policy and Practice, Elsevier, vol. 172(C).
    3. Maclean, Johanna Catherine & Tello-Trillo, Sebastian & Webber, Douglas, 2023. "Losing insurance and psychiatric hospitalizations," Journal of Economic Behavior & Organization, Elsevier, vol. 205(C), pages 508-527.
    4. Elizaveta Sivak & Paulina Pankowska & Adriënne Mendrik & Tom Emery & Javier Garcia-Bernardo & Seyit Höcük & Kasia Karpinska & Angelica Maineri & Joris Mulder & Malvina Nissim & Gert Stulp, 2024. "Combining the strengths of Dutch survey and register data in a data challenge to predict fertility (PreFer)," Journal of Computational Social Science, Springer, vol. 7(2), pages 1403-1431, October.
    5. Martin Obschonka & Moren Levesque, 2024. "A Market for Lemons? Strategic Directions for a Vigilant Application of Artificial Intelligence in Entrepreneurship Research," Papers 2409.08890, arXiv.org.
    6. Md Tauhidul Islam & Lei Xing, 2023. "Cartography of Genomic Interactions Enables Deep Analysis of Single-Cell Expression Data," Nature Communications, Nature, vol. 14(1), pages 1-17, December.
    7. Md Tauhidul Islam & Zixia Zhou & Hongyi Ren & Masoud Badiei Khuzani & Daniel Kapp & James Zou & Lu Tian & Joseph C. Liao & Lei Xing, 2023. "Revealing hidden patterns in deep neural network feature space continuum via manifold learning," Nature Communications, Nature, vol. 14(1), pages 1-20, December.
    8. Jasper Tromp & David Bauer & Brian L. Claggett & Matthew Frost & Mathias Bøtcher Iversen & Narayana Prasad & Mark C. Petrie & Martin G. Larson & Justin A. Ezekowitz & Scott D. Solomon, 2022. "A formal validation of a deep learning-based automated workflow for the interpretation of the echocardiogram," Nature Communications, Nature, vol. 13(1), pages 1-9, December.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:natcom:v:15:y:2024:i:1:d:10.1038_s41467-024-44824-z. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item about which we are uncertain.

    If CitEc recognized a bibliographic reference but did not link it to an item in RePEc, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.nature.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.