
A comparison of machine learning algorithms for the surveillance of autism spectrum disorder

Authors
  • Scott H Lee
  • Matthew J Maenner
  • Charles M Heilig

Abstract

Objective: The Centers for Disease Control and Prevention (CDC) coordinates a labor-intensive process to measure the prevalence of autism spectrum disorder (ASD) among children in the United States. Random forest methods have shown promise in speeding up this process, but they lag behind human classification accuracy by about 5%. We explore whether more recently available document classification algorithms can close this gap.

Materials and methods: Using data gathered from a single surveillance site, we applied 8 supervised learning algorithms to predict whether children meet the case definition for ASD based solely on the words in their evaluations. We compared the algorithms’ performance across 10 random train-test splits of the data, using classification accuracy, F1 score, and number of positive calls to evaluate their potential use for surveillance.

Results: Across the 10 train-test cycles, the random forest and the support vector machine with Naive Bayes features (NB-SVM) each achieved slightly more than 87% mean accuracy. The NB-SVM produced significantly more false negatives than false positives (P = 0.027), but the random forest did not, making its prevalence estimates very close to the true prevalence in the data. The best-performing neural network performed similarly to the random forest on both measures.

Discussion: The random forest performed as well as more recently available models like the NB-SVM and the neural network, and it also produced good prevalence estimates. The NB-SVM may not be a good candidate for a fully automated surveillance workflow because of its increased false negatives. More sophisticated algorithms, such as hierarchical convolutional neural networks, may not be feasible to train given the characteristics of the data. Current algorithms might perform better if the data were abstracted and processed differently and if they took into account information about the children in addition to their evaluations.
Conclusion: Deep learning models performed similarly to traditional machine learning methods at predicting the clinician-assigned case status for CDC’s autism surveillance system. While deep learning methods had limited benefit in this task, they may have applications in other surveillance systems.
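The evaluation protocol described in the abstract can be illustrated with a minimal sketch: compare a random forest against an SVM with Naive Bayes log-count-ratio features (NB-SVM) across 10 random train-test splits, scoring accuracy and F1. This is an assumption-laden illustration, not the authors' code: the `nb_log_count_ratio` helper, the scikit-learn estimators, the hyperparameters, and the synthetic stand-in features are all hypothetical.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

def nb_log_count_ratio(X, y, alpha=1.0):
    """Naive Bayes log-count ratio used to reweight features for NB-SVM."""
    p = alpha + X[y == 1].sum(axis=0)
    q = alpha + X[y == 0].sum(axis=0)
    return np.log((p / p.sum()) / (q / q.sum()))

# Synthetic stand-in for word-count features extracted from evaluations.
X, y = make_classification(n_samples=500, n_features=100, random_state=0)
X = np.abs(X)  # NB-SVM expects nonnegative, count-like features

results = {"rf": [], "nbsvm": []}
for seed in range(10):  # 10 random train-test splits, as in the study
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=seed)

    # Random forest on the raw features
    rf = RandomForestClassifier(n_estimators=200, random_state=seed)
    rf.fit(X_tr, y_tr)
    pred = rf.predict(X_te)
    results["rf"].append((accuracy_score(y_te, pred), f1_score(y_te, pred)))

    # Linear SVM on NB-reweighted features
    r = nb_log_count_ratio(X_tr, y_tr)
    svm = LinearSVC().fit(X_tr * r, y_tr)
    pred = svm.predict(X_te * r)
    results["nbsvm"].append((accuracy_score(y_te, pred), f1_score(y_te, pred)))

for name, scores in results.items():
    acc, f1 = np.mean(scores, axis=0)
    print(f"{name}: mean accuracy={acc:.3f}, mean F1={f1:.3f}")
```

On the real surveillance data, one would additionally tally false negatives versus false positives per split, since the abstract's prevalence argument rests on that error asymmetry.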

Suggested Citation

  • Scott H Lee & Matthew J Maenner & Charles M Heilig, 2019. "A comparison of machine learning algorithms for the surveillance of autism spectrum disorder," PLOS ONE, Public Library of Science, vol. 14(9), pages 1-11, September.
  • Handle: RePEc:plo:pone00:0222907
    DOI: 10.1371/journal.pone.0222907

    Download full text from publisher

    File URL: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0222907
    Download Restriction: no

    File URL: https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0222907&type=printable
    Download Restriction: no


    Most related items

These are the items that most often cite the same works as this one or are cited by the same works that cite this one.
    1. Angel M. Morales & Patrick Tarwater & Indika Mallawaarachchi & Alok Kumar Dwivedi & Juan B. Figueroa-Casas, 2015. "Multinomial logistic regression approach for the evaluation of binary diagnostic test in medical research," Statistics in Transition new series, Główny Urząd Statystyczny (Polska), vol. 16(2), pages 203-222, June.
    2. A. S. Hedayat & Junhui Wang & Tu Xu, 2015. "Minimum clinically important difference in medical studies," Biometrics, The International Biometric Society, vol. 71(1), pages 33-41, March.
    3. Xueyan Mei & Zelong Liu & Ayushi Singh & Marcia Lange & Priyanka Boddu & Jingqi Q. X. Gong & Justine Lee & Cody DeMarco & Chendi Cao & Samantha Platt & Ganesh Sivakumar & Benjamin Gross & Mingqian Hua, 2023. "Interstitial lung disease diagnosis and prognosis using an AI system integrating longitudinal data," Nature Communications, Nature, vol. 14(1), pages 1-11, December.
    4. Saad Bouh Regad & José Antonio Roldán-Nofuentes, 2021. "Global Hypothesis Test to Compare the Predictive Values of Diagnostic Tests Subject to a Case-Control Design," Mathematics, MDPI, vol. 9(6), pages 1-22, March.
    5. Alok Kumar Dwivedi & Indika Mallawaarachchi & Juan B. Figueroa-Casas & Angel M. Morales & Patrick Tarwater, 2015. "Multinomial Logistic Regression Approach For The Evaluation Of Binary Diagnostic Test In Medical Research," Statistics in Transition New Series, Polish Statistical Association, vol. 16(2), pages 203-222, June.
    6. Xingye Qiao & Yufeng Liu, 2009. "Adaptive Weighted Learning for Unbalanced Multicategory Classification," Biometrics, The International Biometric Society, vol. 65(1), pages 159-168, March.
    7. Robert H. Lyles & John M. Williamson & Hung-Mo Lin & Charles M. Heilig, 2005. "Extending McNemar's Test: Estimation and Inference When Paired Binary Outcome Data Are Misclassified," Biometrics, The International Biometric Society, vol. 61(1), pages 287-294, March.
    8. Dwivedi Alok Kumar & Mallawaarachchi Indika & Figueroa-Casas Juan B. & Morales Angel M. & Tarwater Patrick, 2015. "Multinomial Logistic Regression Approach for the Evaluation of Binary Diagnostic Test in Medical Research," Statistics in Transition New Series, Statistics Poland, vol. 16(2), pages 203-222, June.
    9. Peter H. Westfall & James F. Troendle & Gene Pennello, 2010. "Multiple McNemar Tests," Biometrics, The International Biometric Society, vol. 66(4), pages 1185-1191, December.
    10. José Antonio Roldán-Nofuentes & Saad Bouh Regad, 2021. "Confidence Intervals and Sample Size to Compare the Predictive Values of Two Diagnostic Tests," Mathematics, MDPI, vol. 9(13), pages 1-19, June.


    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.