
Enhancing representation in radiography-reports foundation model: a granular alignment algorithm using masked contrastive learning

Authors

Listed:
  • Weijian Huang

    (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences
    Pengcheng Laboratory
    University of Chinese Academy of Sciences)

  • Cheng Li

    (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences)

  • Hong-Yu Zhou

    (Harvard Medical School)

  • Hao Yang

    (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences
    Pengcheng Laboratory
    University of Chinese Academy of Sciences)

  • Jiarun Liu

    (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences
    Pengcheng Laboratory
    University of Chinese Academy of Sciences)

  • Yong Liang

    (Pengcheng Laboratory)

  • Hairong Zheng

    (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences)

  • Shaoting Zhang

    (Qingyuan Research Institute, Shanghai Jiao Tong University)

  • Shanshan Wang

    (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences)

Abstract

Recently, multi-modal vision-language foundation models have gained significant attention in the medical field. While these models offer great opportunities, they still face crucial challenges, such as the need for fine-grained knowledge understanding in computer-aided diagnosis and the ability to use very limited or even no task-specific labeled data in real-world clinical applications. In this study, we present MaCo, a masked contrastive chest X-ray foundation model that tackles these challenges. MaCo explores masked contrastive learning to simultaneously achieve fine-grained image understanding and zero-shot learning for a variety of medical imaging tasks. It introduces a correlation weighting mechanism that adjusts the correlation between masked chest X-ray image patches and their corresponding reports, thereby enhancing the model’s representation learning capabilities. To evaluate the performance of MaCo, we conducted extensive experiments on six well-known open-source X-ray datasets. The experimental results demonstrate the superiority of MaCo over ten state-of-the-art approaches across tasks such as classification, segmentation, detection, and phrase grounding. These findings highlight the significant potential of MaCo in advancing a wide range of medical image analysis tasks.
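
The correlation weighting idea described above, in which each visible (unmasked) chest X-ray patch is weighted by how strongly it corresponds to the paired report before image-report alignment, can be illustrated with a short sketch. The Python/PyTorch code below is a minimal, hypothetical illustration under assumed names and tensor shapes (masked_contrastive_loss, patch_emb, report_emb, and keep_mask are all invented for this example); it is not the authors' released MaCo implementation. Patch-report similarities are converted into weights, the weighted patches are pooled into a single image embedding, and a standard symmetric contrastive (InfoNCE) loss aligns images with their reports.

```python
# Hypothetical sketch of masked image-report contrastive alignment with
# per-patch correlation weighting. Names and shapes are illustrative only.
import torch
import torch.nn.functional as F


def masked_contrastive_loss(patch_emb, report_emb, keep_mask, temperature=0.07):
    """
    patch_emb:  (B, P, D) embeddings of image patches.
    report_emb: (B, D) pooled report (text) embeddings.
    keep_mask:  (B, P) boolean, True for visible (unmasked) patches.
    """
    patch_emb = F.normalize(patch_emb, dim=-1)
    report_emb = F.normalize(report_emb, dim=-1)

    # Patch-to-report similarity, used here as the correlation-weighting signal.
    sim = torch.einsum("bpd,bd->bp", patch_emb, report_emb)        # (B, P)
    sim = sim.masked_fill(~keep_mask, float("-inf"))               # drop masked patches
    weights = sim.softmax(dim=-1)                                  # (B, P), zeros where masked

    # Weighted pooling of visible patches into one image embedding per study.
    img_emb = torch.einsum("bp,bpd->bd", weights, patch_emb)
    img_emb = F.normalize(img_emb, dim=-1)

    # Standard symmetric InfoNCE between pooled image embeddings and reports.
    logits = img_emb @ report_emb.t() / temperature                # (B, B)
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


# Toy usage with random tensors.
if __name__ == "__main__":
    B, P, D = 4, 196, 512
    patches = torch.randn(B, P, D)
    reports = torch.randn(B, D)
    keep = torch.rand(B, P) > 0.75   # roughly 25% of patches visible
    keep[:, 0] = True                # guarantee at least one visible patch per study
    print(masked_contrastive_loss(patches, reports, keep).item())
```

In this toy version the same patch-report similarities serve double duty as both the weighting signal and the basis of the pooled image representation; the paper's actual mechanism may compute the weights and apply the masking differently.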

Suggested Citation

  • Weijian Huang & Cheng Li & Hong-Yu Zhou & Hao Yang & Jiarun Liu & Yong Liang & Hairong Zheng & Shaoting Zhang & Shanshan Wang, 2024. "Enhancing representation in radiography-reports foundation model: a granular alignment algorithm using masked contrastive learning," Nature Communications, Nature, vol. 15(1), pages 1-12, December.
  • Handle: RePEc:nat:natcom:v:15:y:2024:i:1:d:10.1038_s41467-024-51749-0
    DOI: 10.1038/s41467-024-51749-0

    Download full text from publisher

    File URL: https://www.nature.com/articles/s41467-024-51749-0
    File Function: Abstract
    Download Restriction: no

    File URL: https://libkey.io/10.1038/s41467-024-51749-0?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. Qi Chang & Zhennan Yan & Mu Zhou & Hui Qu & Xiaoxiao He & Han Zhang & Lohendran Baskaran & Subhi Al’Aref & Hongsheng Li & Shaoting Zhang & Dimitris N. Metaxas, 2023. "Mining multi-center heterogeneous medical data with distributed synthetic learning," Nature Communications, Nature, vol. 14(1), pages 1-16, December.
    2. Michael Moor & Oishi Banerjee & Zahra Shakeri Hossein Abad & Harlan M. Krumholz & Jure Leskovec & Eric J. Topol & Pranav Rajpurkar, 2023. "Foundation models for generalist medical artificial intelligence," Nature, Nature, vol. 616(7956), pages 259-265, April.
    3. Yukun Zhou & Mark A. Chia & Siegfried K. Wagner & Murat S. Ayhan & Dominic J. Williamson & Robbert R. Struyven & Timing Liu & Moucheng Xu & Mateo G. Lozano & Peter Woodward-Court & Yuka Kihara & Andre, 2023. "A foundation model for generalizable disease detection from retinal images," Nature, Nature, vol. 622(7981), pages 156-163, October.
    4. Xiaoman Zhang & Chaoyi Wu & Ya Zhang & Weidi Xie & Yanfeng Wang, 2023. "Knowledge-enhanced visual-language pre-training on chest radiology images," Nature Communications, Nature, vol. 14(1), pages 1-12, December.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Maksim Makarenko & Arturo Burguete-Lopez & Qizhou Wang & Silvio Giancola & Bernard Ghanem & Luca Passone & Andrea Fratalocchi, 2024. "Hardware-accelerated integrated optoelectronic platform towards real-time high-resolution hyperspectral video understanding," Nature Communications, Nature, vol. 15(1), pages 1-12, December.
    2. Thiers, Fabio A. & Lucy, Kimberly, 2024. "A Distinct Approach to Clinical GenAI Oversight," OSF Preprints vm6zy, Center for Open Science.
    3. Junwei Cheng & Chaoran Huang & Jialong Zhang & Bo Wu & Wenkai Zhang & Xinyu Liu & Jiahui Zhang & Yiyi Tang & Hailong Zhou & Qiming Zhang & Min Gu & Jianji Dong & Xinliang Zhang, 2024. "Multimodal deep learning using on-chip diffractive optics with in situ training capability," Nature Communications, Nature, vol. 15(1), pages 1-10, December.
    4. Soroosh Tayebi Arasteh & Tianyu Han & Mahshad Lotfinia & Christiane Kuhl & Jakob Nikolas Kather & Daniel Truhn & Sven Nebelung, 2024. "Large language models streamline automated machine learning for clinical studies," Nature Communications, Nature, vol. 15(1), pages 1-12, December.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:natcom:v:15:y:2024:i:1:d:10.1038_s41467-024-51749-0. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.nature.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.