
Estimating Dynamic Treatment Regimes in Mobile Health Using V-Learning

Author

Listed:
  • Daniel J. Luckett
  • Eric B. Laber
  • Anna R. Kahkoska
  • David M. Maahs
  • Elizabeth Mayer-Davis
  • Michael R. Kosorok

Abstract

The vision for precision medicine is to use individual patient characteristics to inform a personalized treatment plan that leads to the best possible healthcare for each patient. Mobile technologies have an important role to play in this vision as they offer a means to monitor a patient’s health status in real-time and subsequently to deliver interventions if, when, and in the dose that they are needed. Dynamic treatment regimes formalize individualized treatment plans as sequences of decision rules, one per stage of clinical intervention, that map current patient information to a recommended treatment. However, most existing methods for estimating optimal dynamic treatment regimes are designed for a small number of fixed decision points occurring on a coarse time-scale. We propose a new reinforcement learning method for estimating an optimal treatment regime that is applicable to data collected using mobile technologies in an outpatient setting. The proposed method accommodates an indefinite time horizon and minute-by-minute decision making that are common in mobile health applications. We show that the proposed estimators are consistent and asymptotically normal under mild conditions. The proposed methods are applied to estimate an optimal dynamic treatment regime for controlling blood glucose levels in patients with type 1 diabetes.
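
To make the abstract's notion of a decision rule concrete, the short Python sketch below treats a treatment regime as a rule that maps a patient's current state to a treatment and scores candidate rules by a Monte Carlo estimate of their value on simulated data. The glucose dynamics, reward, treatment set, and linear softmax policy class are all illustrative assumptions; this is not the V-learning estimator developed in the article.

# Illustrative sketch only: a dynamic treatment regime as a decision rule that
# maps the current patient state to a treatment, evaluated by Monte Carlo on
# synthetic data. The glucose dynamics, reward, and policy class are assumptions
# made for illustration, not the authors' V-learning method.
import numpy as np

rng = np.random.default_rng(0)
TREATMENTS = [0, 1]  # hypothetical: 0 = no action, 1 = deliver an intervention

def decision_rule(state, beta):
    """Softmax decision rule: sample a treatment given the current state."""
    p1 = 1.0 / (1.0 + np.exp(-(state @ beta)))  # probability of treatment 1
    return rng.choice(TREATMENTS, p=[1.0 - p1, p1])

def simulate_episode(beta, horizon=60, gamma=0.95):
    """Roll out one synthetic patient trajectory and return its discounted value."""
    glucose = 150.0 + 30.0 * rng.standard_normal()  # hypothetical starting glucose (mg/dL)
    value = 0.0
    for t in range(horizon):
        state = np.array([1.0, (glucose - 120.0) / 50.0])  # intercept + scaled glucose
        action = decision_rule(state, beta)
        # Hypothetical dynamics: the intervention pulls glucose toward 120 mg/dL.
        glucose += -5.0 * action * np.sign(glucose - 120.0) + 10.0 * rng.standard_normal()
        reward = -abs(glucose - 120.0)  # reward = closeness to the glucose target
        value += (gamma ** t) * reward
    return value

def estimate_value(beta, n_patients=500):
    """Monte Carlo estimate of the value of the regime indexed by beta."""
    return np.mean([simulate_episode(beta) for _ in range(n_patients)])

# Compare two candidate decision rules; a real analysis would search the policy class.
for beta in (np.array([0.0, 0.0]), np.array([0.0, 2.0])):
    print(beta, round(estimate_value(beta), 1))

A real analysis would, as in the article, estimate the value of each candidate regime from data collected through mobile technologies and search the policy class for the maximizer, rather than relying on a hand-built simulator as above.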

Suggested Citation

  • Daniel J. Luckett & Eric B. Laber & Anna R. Kahkoska & David M. Maahs & Elizabeth Mayer-Davis & Michael R. Kosorok, 2020. "Estimating Dynamic Treatment Regimes in Mobile Health Using V-Learning," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 115(530), pages 692-706, April.
  • Handle: RePEc:taf:jnlasa:v:115:y:2020:i:530:p:692-706
    DOI: 10.1080/01621459.2018.1537919

    Download full text from publisher

    File URL: http://hdl.handle.net/10.1080/01621459.2018.1537919
    Download Restriction: Access to full text is restricted to subscribers.

    File URL: https://libkey.io/10.1080/01621459.2018.1537919?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    As access to this document is restricted, you may want to search for a different version of it.

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Shi, Chengchun & Luo, Shikai & Le, Yuan & Zhu, Hongtu & Song, Rui, 2022. "Statistically efficient advantage learning for offline reinforcement learning in infinite horizons," LSE Research Online Documents on Economics 115598, London School of Economics and Political Science, LSE Library.
    2. Gao, Yuhe & Shi, Chengchun & Song, Rui, 2023. "Deep spectral Q-learning with application to mobile health," LSE Research Online Documents on Economics 119445, London School of Economics and Political Science, LSE Library.
    3. Zhang, Yingying & Shi, Chengchun & Luo, Shikai, 2023. "Conformal off-policy prediction," LSE Research Online Documents on Economics 118250, London School of Economics and Political Science, LSE Library.
    4. Yuqian Zhang & Weijie Ji & Jelena Bradic, 2021. "Dynamic treatment effects: high-dimensional inference under model misspecification," Papers 2111.06818, arXiv.org, revised Jun 2023.
    5. Zhen Li & Jie Chen & Eric Laber & Fang Liu & Richard Baumgartner, 2023. "Optimal Treatment Regimes: A Review and Empirical Comparison," International Statistical Review, International Statistical Institute, vol. 91(3), pages 427-463, December.
    6. Ke Sun & Linglong Kong & Hongtu Zhu & Chengchun Shi, 2024. "Optimal Treatment Allocation Strategies for A/B Testing in Partially Observable Time Series Experiments," Papers 2408.05342, arXiv.org, revised Oct 2024.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:taf:jnlasa:v:115:y:2020:i:530:p:692-706. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

We have no bibliographic references for this item. You can help add them by using this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Chris Longhurst (email available below). General contact details of provider: http://www.tandfonline.com/UASA20 .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.