Author
Listed:
- Meng Wang
(Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR))
- Tian Lin
(Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong)
- Lianyu Wang
(Nanjing University of Aeronautics and Astronautics
Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing University of Aeronautics and Astronautics)
- Aidi Lin
(Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong)
- Ke Zou
(National Key Laboratory of Fundamental Science on Synthetic Vision and the College of Computer Science, Sichuan University)
- Xinxing Xu
(Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR))
- Yi Zhou
(Soochow University)
- Yuanyuan Peng
(Anhui Medical University)
- Qingquan Meng
(Soochow University)
- Yiming Qian
(Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR))
- Guoyao Deng
(National Key Laboratory of Fundamental Science on Synthetic Vision and the College of Computer Science, Sichuan University)
- Zhiqun Wu
(Longchuan People’s Hospital)
- Junhong Chen
(Puning People’s Hospital)
- Jianhong Lin
(Haifeng PengPai Memory Hospital)
- Mingzhi Zhang
(Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong)
- Weifang Zhu
(Soochow University)
- Changqing Zhang
(College of Intelligence and Computing, Tianjin University)
- Daoqiang Zhang
(Nanjing University of Aeronautics and Astronautics
Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing University of Aeronautics and Astronautics)
- Rick Siow Mong Goh
(Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR))
- Yong Liu
(Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR))
- Chi Pui Pang
(Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong
The Chinese University of Hong Kong)
- Xinjian Chen
(Soochow University)
- Haoyu Chen
(Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong)
- Huazhu Fu
(Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR))
Abstract
Failure to recognize samples from classes unseen during training is a major limitation of artificial intelligence in real-world implementations for the recognition and classification of retinal anomalies. We establish an uncertainty-inspired open set (UIOS) model, which is trained on fundus images of 9 retinal conditions. Besides assessing the probability of each category, UIOS also calculates an uncertainty score to express its confidence. Our UIOS model with a thresholding strategy achieves F1 scores of 99.55%, 97.01% and 91.91% on the internal testing set, the external target categories (TC)-JSIEC dataset and the TC-unseen testing set, respectively, compared with F1 scores of 92.20%, 80.69% and 64.74% for the standard AI model. Furthermore, UIOS correctly assigns high uncertainty scores, which prompt a manual check, to datasets of non-target-category retinal diseases, low-quality fundus images, and non-fundus images. UIOS provides a robust method for real-world screening of retinal anomalies.
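As a minimal sketch (not the authors' released code), the decision rule described in the abstract can be illustrated with an evidential-style formulation: a classifier outputs non-negative per-class evidence, from which expected class probabilities and a subjective-logic uncertainty score are derived, and any sample whose uncertainty exceeds a threshold is deferred to manual review rather than forced into one of the training categories. The function name, the evidence vectors, the four-class setup and the threshold theta below are illustrative assumptions, not values from the paper.

# Minimal sketch (not the authors' implementation) of uncertainty-thresholded
# open-set inference in the spirit described by the abstract. Assumes an
# evidential/subjective-logic formulation: the network outputs non-negative
# per-class "evidence"; class probabilities and an uncertainty score are
# derived from the induced Dirichlet distribution. Evidence values and the
# threshold `theta` are illustrative placeholders.
import numpy as np

def open_set_decision(evidence: np.ndarray, theta: float = 0.3):
    """Return (label_index_or_None, class_probs, uncertainty).

    evidence : non-negative per-class evidence, shape (K,)
    theta    : uncertainty threshold above which the sample is deferred
    """
    alpha = evidence + 1.0               # Dirichlet concentration parameters
    strength = alpha.sum()               # total evidence + K
    probs = alpha / strength             # expected class probabilities
    uncertainty = len(alpha) / strength  # subjective-logic uncertainty mass
    if uncertainty > theta:
        return None, probs, uncertainty  # defer to manual check
    return int(probs.argmax()), probs, uncertainty

# Example: a confident in-distribution sample vs. an ambiguous one.
print(open_set_decision(np.array([40.0, 2.0, 1.0, 0.5])))  # low uncertainty, predicts class 0
print(open_set_decision(np.array([0.4, 0.3, 0.2, 0.1])))   # high uncertainty, deferred

In such a formulation, an out-of-distribution input (a non-target-category disease, a low-quality image or a non-fundus image) would ideally yield low total evidence, an uncertainty close to 1 and therefore a deferral to manual review, which matches the behaviour reported for UIOS.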
Suggested Citation
Meng Wang & Tian Lin & Lianyu Wang & Aidi Lin & Ke Zou & Xinxing Xu & Yi Zhou & Yuanyuan Peng & Qingquan Meng & Yiming Qian & Guoyao Deng & Zhiqun Wu & Junhong Chen & Jianhong Lin & Mingzhi Zhang & We, 2023.
"Uncertainty-inspired open set learning for retinal anomaly identification,"
Nature Communications, Nature, vol. 14(1), pages 1-11, December.
Handle:
RePEc:nat:natcom:v:14:y:2023:i:1:d:10.1038_s41467-023-42444-7
DOI: 10.1038/s41467-023-42444-7
References listed on IDEAS
- Ling-Ping Cen & Jie Ji & Jian-Wei Lin & Si-Tong Ju & Hong-Jie Lin & Tai-Ping Li & Yun Wang & Jian-Feng Yang & Yu-Fen Liu & Shaoying Tan & Li Tan & Dongjie Li & Yifan Wang & Dezhi Zheng & Yongqun Xiong, 2021.
"Automatic detection of 39 fundus diseases and conditions in retinal photographs using deep neural networks,"
Nature Communications, Nature, vol. 12(1), pages 1-13, December.
- Henrik Olsson & Kimmo Kartasalo & Nita Mulliqi & Marco Capuccini & Pekka Ruusuvuori & Hemamali Samaratunga & Brett Delahunt & Cecilia Lindskog & Emiel A. M. Janssen & Anders Blilie & Lars Egevad & Ola, 2022.
"Estimating diagnostic uncertainty in artificial intelligence assisted pathology using conformal prediction,"
Nature Communications, Nature, vol. 13(1), pages 1-10, December.
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one.
- Sachin Panchal & Ankita Naik & Manesh Kokare & Samiksha Pachade & Rushikesh Naigaonkar & Prerana Phadnis & Archana Bhange, 2023.
"Retinal Fundus Multi-Disease Image Dataset (RFMiD) 2.0: A Dataset of Frequently and Rarely Identified Diseases,"
Data, MDPI, vol. 8(2), pages 1-16, January.
- Weimin Tan & Qiaoling Wei & Zhen Xing & Hao Fu & Hongyu Kong & Yi Lu & Bo Yan & Chen Zhao, 2024.
"Fairer AI in ophthalmology via implicit fairness learning for mitigating sexism and ageism,"
Nature Communications, Nature, vol. 15(1), pages 1-13, December.