
Aiding the prescriber: developing a machine learning approach to personalized risk modeling for chronic opioid therapy amongst US Army soldiers

Authors

Listed:
  • Margrét Vilborg Bjarnadóttir

    (University of Maryland)

  • David B. Anderson

    (Villanova University)

  • Ritu Agarwal

    (University of Maryland; Johns Hopkins University)

  • D. Alan Nelson

    (Stanford University)

Abstract

The opioid epidemic is a major policy concern. The widespread availability of opioids, which is fueled by physician prescribing patterns, medication diversion, and the interaction with potential illicit opioid use, has been implicated as a proximal cause of subsequent opioid dependence and mortality. Risk indicators related to chronic opioid therapy (COT) at the point of care may influence physicians’ prescribing decisions, potentially reducing rates of dependency and abuse. In this paper, we investigate the performance of machine learning algorithms for predicting the risk of COT. Using data on over 12 million observations of active duty US Army soldiers, we apply machine learning models to predict the risk of COT in the initial months of prescription. We use the area under the curve (AUC) as an overall measure of model performance, and we focus on the positive predictive value (PPV), which reflects the models’ ability to accurately target military members for intervention. Across the models tested, AUC ranges between 0.83 and 0.87. When we focus on the top 1% of members at highest risk, we observe PPV values of 8.4% and 20.3% for months 1 and 3, respectively. We further investigate the performance of sparse models that can be implemented in sparse data environments. We find that when the goal is to identify patients at the highest risk of chronic use, these sparse linear models achieve a performance similar to models trained on hundreds of variables. Our predictive models exhibit high accuracy and can alert prescribers to the risk of COT for the highest risk patients. Optimized sparse models identify a parsimonious set of factors to predict COT: the initial supply of opioids, the supply of opioids in the month being studied, and the number of prescriptions for psychotropic medications. Future research should investigate the possible effects of these tools on prescriber behavior (e.g., the benefit of clinician nudging at the point of care in outpatient settings).
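The abstract reports model quality as AUC together with the positive predictive value (PPV) among the top 1% of predicted risk, and highlights sparse linear models built on a handful of factors. The sketch below is not the authors' pipeline; it only illustrates one common way to compute these quantities, using an L1-penalized logistic regression on synthetic data. All feature names, coefficients, and data are hypothetical.

    # Illustrative sketch only: evaluates a COT risk model with AUC and
    # PPV in the top 1% of predicted risk, using an L1-penalized logistic
    # regression as a stand-in for the paper's sparse models.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 100_000

    # Hypothetical features loosely mirroring the three factors named in the
    # abstract: initial opioid supply (days), opioid supply in the study
    # month (days), and number of psychotropic prescriptions.
    X = np.column_stack([
        rng.integers(0, 31, n),   # initial_opioid_supply_days
        rng.integers(0, 31, n),   # current_month_opioid_supply_days
        rng.poisson(0.5, n),      # psychotropic_rx_count
    ])
    # Synthetic chronic opioid therapy (COT) outcome.
    logits = 0.08 * X[:, 0] + 0.10 * X[:, 1] + 0.4 * X[:, 2] - 5.0
    y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y)

    # Sparse (L1-regularized) logistic regression: a simple proxy for the
    # "sparse linear models" discussed in the abstract.
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    model.fit(X_train, y_train)
    scores = model.predict_proba(X_test)[:, 1]

    def ppv_at_top_fraction(y_true, y_score, fraction=0.01):
        """PPV among the `fraction` of patients with highest predicted risk."""
        k = max(1, int(len(y_score) * fraction))
        top_idx = np.argsort(y_score)[::-1][:k]
        return y_true[top_idx].mean()

    print("AUC:", roc_auc_score(y_test, scores))
    print("PPV in top 1%:", ppv_at_top_fraction(np.asarray(y_test), scores))

On real claims data, the PPV threshold would be set by intervention capacity (here, the top 1% of soldiers flagged), and the regularization strength would be tuned to trade sparsity against discrimination.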

Suggested Citation

  • Margrét Vilborg Bjarnadóttir & David B. Anderson & Ritu Agarwal & D. Alan Nelson, 2022. "Aiding the prescriber: developing a machine learning approach to personalized risk modeling for chronic opioid therapy amongst US Army soldiers," Health Care Management Science, Springer, vol. 25(4), pages 649-665, December.
  • Handle: RePEc:kap:hcarem:v:25:y:2022:i:4:d:10.1007_s10729-022-09605-4
    DOI: 10.1007/s10729-022-09605-4

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s10729-022-09605-4
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1007/s10729-022-09605-4?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Jiaming Zeng & Berk Ustun & Cynthia Rudin, 2017. "Interpretable classification models for recidivism prediction," Journal of the Royal Statistical Society Series A, Royal Statistical Society, vol. 180(3), pages 689-722, June.
    2. Justine S. Hastings & Mark Howison & Sarah E. Inman, 2020. "Predicting high-risk opioid prescriptions before they are given," Proceedings of the National Academy of Sciences, Proceedings of the National Academy of Sciences, vol. 117(4), pages 1917-1923, January.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Margaret L. Brandeau, 2023. "Responding to the US opioid crisis: leveraging analytics to support decision making," Health Care Management Science, Springer, vol. 26(4), pages 599-603, December.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Shan Huang & Michael Allan Ribers & Hannes Ullrich, 2021. "The Value of Data for Prediction Policy Problems: Evidence from Antibiotic Prescribing," Discussion Papers of DIW Berlin 1939, DIW Berlin, German Institute for Economic Research.
    2. Michael A. Ribers & Hannes Ullrich, 2019. "Battling Antibiotic Resistance: Can Machine Learning Improve Prescribing?," Discussion Papers of DIW Berlin 1803, DIW Berlin, German Institute for Economic Research.
    3. Shams Mehdi & Pratyush Tiwary, 2024. "Thermodynamics-inspired explanations of artificial intelligence," Nature Communications, Nature, vol. 15(1), pages 1-12, December.
    4. Huang, Shan & Ribers, Michael Allan & Ullrich, Hannes, 2022. "Assessing the value of data for prediction policies: The case of antibiotic prescribing," Economics Letters, Elsevier, vol. 213(C).
    5. Xiaochen Hu & Xudong Zhang & Nicholas Lovrich, 2021. "Public perceptions of police behavior during traffic stops: logistic regression and machine learning approaches compared," Journal of Computational Social Science, Springer, vol. 4(1), pages 355-380, May.
    6. Wei-Hsuan Lo-Ciganic & James L Huang & Hao H Zhang & Jeremy C Weiss & C Kent Kwoh & Julie M Donohue & Adam J Gordon & Gerald Cochran & Daniel C Malone & Courtney C Kuza & Walid F Gellad, 2020. "Using machine learning to predict risk of incident opioid use disorder among fee-for-service Medicare beneficiaries: A prognostic study," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-16, July.
    7. Emilio Carrizosa & Cristina Molero-Río & Dolores Romero Morales, 2021. "Mathematical optimization in classification and regression trees," TOP: An Official Journal of the Spanish Society of Statistics and Operations Research, Springer;Sociedad de Estadística e Investigación Operativa, vol. 29(1), pages 5-33, April.
    8. Michael Allan Ribers & Hannes Ullrich, 2020. "Machine Predictions and Human Decisions with Variation in Payoffs and Skill," CESifo Working Paper Series 8702, CESifo.
    9. Beau Coker & Cynthia Rudin & Gary King, 2021. "A Theory of Statistical Inference for Ensuring the Robustness of Scientific Results," Management Science, INFORMS, vol. 67(10), pages 6174-6197, October.
    10. Dragos Florin Ciocan & Velibor V. Mišić, 2022. "Interpretable Optimal Stopping," Management Science, INFORMS, vol. 68(3), pages 1616-1638, March.
    11. Kai Feng & Han Hong & Ke Tang & Jingyuan Wang, 2023. "Statistical Tests for Replacing Human Decision Makers with Algorithms," Papers 2306.11689, arXiv.org, revised Dec 2024.
    12. Toru Kitagawa & Shosei Sakaguchi & Aleksey Tetenov, 2021. "Constrained Classification and Policy Learning," Papers 2106.12886, arXiv.org, revised Jul 2023.
    13. Michael Allan Ribers & Hannes Ullrich, 2023. "Machine learning and physician prescribing: a path to reduced antibiotic use," Berlin School of Economics Discussion Papers 0019, Berlin School of Economics.
    14. Cynthia Rudin & Berk Ustun, 2018. "Optimized Scoring Systems: Toward Trust in Machine Learning for Healthcare and Criminal Justice," Interfaces, INFORMS, vol. 48(5), pages 449-466, October.
    15. Kristian Lum & David B. Dunson & James Johndrow, 2022. "Closer than they appear: A Bayesian perspective on individual‐level heterogeneity in risk assessment," Journal of the Royal Statistical Society Series A, Royal Statistical Society, vol. 185(2), pages 588-614, April.
    16. Jon Kleinberg & Sendhil Mullainathan, 2019. "Simplicity Creates Inequity: Implications for Fairness, Stereotypes, and Interpretability," NBER Working Papers 25854, National Bureau of Economic Research, Inc.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:kap:hcarem:v:25:y:2022:i:4:d:10.1007_s10729-022-09605-4. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing. General contact details of provider: http://www.springer.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.
