
An Approximation Approach for Response-Adaptive Clinical Trial Design

Authors

Listed:
  • Vishal Ahuja

    (Cox School of Business, Southern Methodist University, Dallas, Texas 75275)

  • John R. Birge

    (Booth School of Business, University of Chicago, Chicago, Illinois 60637)

Abstract

Multiarmed bandit (MAB) problems, typically modeled as Markov decision processes (MDPs), exemplify the learning versus earning trade-off. An area that has motivated theoretical research in MAB designs is the study of clinical trials, where the application of such designs has the potential to significantly improve patient outcomes. However, for many practical problems of interest, the state space is intractably large, rendering exact approaches to solving MDPs impractical. In particular, settings that require multiple simultaneous allocations lead to an expanded state and action-outcome space, necessitating the use of approximation approaches. We propose a novel approximation approach that combines the strengths of multiple methods: grid-based state discretization, value function approximation methods, and techniques for a computationally efficient implementation. The hallmark of our approach is the accurate approximation of the value function that combines linear interpolation with bounds on interpolated value and the addition of a learning component to the objective function. Computational analysis on relevant datasets shows that our approach outperforms existing heuristics (e.g., greedy and upper confidence bound family of algorithms) and a popular Lagrangian-based approximation method, where we find that the average regret improves by up to 58.3%. A retrospective implementation on a recently conducted phase 3 clinical trial shows that our design could have reduced the number of failures by 17% relative to the randomized control design used in that trial. Our proposed approach makes it practically feasible for trial administrators and regulators to implement Bayesian response-adaptive designs on large clinical trials with potentially significant gains.
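
The abstract compresses a lot of machinery; as a point of reference, here is a minimal sketch (mine, not the authors' implementation) of the exact Bayesian dynamic program that such response-adaptive designs approximate, alongside the greedy heuristic mentioned above as a benchmark. It assumes a deliberately small setting of my own choosing: two arms with Bernoulli responses, independent Beta(1, 1) priors, one patient allocated per period, and expected successes as the objective. The names HORIZON, posterior_mean, value, and greedy_value are illustrative only. With many patients allocated simultaneously, as in the paper's setting, the state and action-outcome space grows combinatorially; that is what motivates the grid-based discretization and interpolated value function approximation described in the abstract, none of which is reproduced here.

```python
# Toy two-armed Bayesian Bernoulli bandit: exact backward induction versus
# the myopic (greedy) heuristic. This is an illustrative sketch, not the
# paper's method; HORIZON and all function names are assumptions of mine.
from functools import lru_cache

HORIZON = 20  # patients left to allocate at the start of the toy trial


def posterior_mean(successes, failures):
    """Posterior mean of a Bernoulli arm under a Beta(1, 1) prior."""
    return (1 + successes) / (2 + successes + failures)


@lru_cache(maxsize=None)
def value(t, s1, f1, s2, f2):
    """Maximum expected future successes when t patients remain and arm k has
    recorded (sk, fk) successes/failures so far (exact backward induction)."""
    if t == 0:
        return 0.0
    p1, p2 = posterior_mean(s1, f1), posterior_mean(s2, f2)
    # Assign the next patient to arm 1 or arm 2; each choice yields an
    # immediate expected success plus the value of the updated belief state.
    v1 = (p1 * (1 + value(t - 1, s1 + 1, f1, s2, f2))
          + (1 - p1) * value(t - 1, s1, f1 + 1, s2, f2))
    v2 = (p2 * (1 + value(t - 1, s1, f1, s2 + 1, f2))
          + (1 - p2) * value(t - 1, s1, f1, s2, f2 + 1))
    return max(v1, v2)


@lru_cache(maxsize=None)
def greedy_value(t, s1, f1, s2, f2):
    """Expected successes of the myopic policy that always assigns the patient
    to the arm with the higher current posterior mean (ties go to arm 1)."""
    if t == 0:
        return 0.0
    p1, p2 = posterior_mean(s1, f1), posterior_mean(s2, f2)
    if p1 >= p2:
        return (p1 * (1 + greedy_value(t - 1, s1 + 1, f1, s2, f2))
                + (1 - p1) * greedy_value(t - 1, s1, f1 + 1, s2, f2))
    return (p2 * (1 + greedy_value(t - 1, s1, f1, s2 + 1, f2))
            + (1 - p2) * greedy_value(t - 1, s1, f1, s2, f2 + 1))


if __name__ == "__main__":
    print(f"Bayes-optimal value : {value(HORIZON, 0, 0, 0, 0):.3f} expected successes")
    print(f"Greedy policy value : {greedy_value(HORIZON, 0, 0, 0, 0):.3f} expected successes")
```

The optimal value is at least the greedy value by construction; the gap between the two is the kind of loss that a good approximation of the value function must avoid once exact backward induction of this kind becomes intractable.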

Suggested Citation

  • Vishal Ahuja & John R. Birge, 2020. "An Approximation Approach for Response-Adaptive Clinical Trial Design," INFORMS Journal on Computing, INFORMS, vol. 32(4), pages 877-894, October.
  • Handle: RePEc:inm:orijoc:v:32:y:4:i:2020:p:877-894
    DOI: 10.1287/ijoc.2020.0969

    Download full text from publisher

    File URL: https://doi.org/10.1287/ijoc.2020.0969
    Download Restriction: no

    File URL: https://libkey.io/10.1287/ijoc.2020.0969?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy you can access through your library subscription

    References listed on IDEAS

    1. William S. Lovejoy, 1991. "Computationally Feasible Bounds for Partially Observed Markov Decision Processes," Operations Research, INFORMS, vol. 39(1), pages 162-175, February.
    2. Kenneth F Schulz & Douglas G Altman & David Moher & for the CONSORT Group, 2010. "CONSORT 2010 Statement: Updated Guidelines for Reporting Parallel Group Randomised Trials," PLOS Medicine, Public Library of Science, vol. 7(3), pages 1-7, March.
    3. DiMasi, Joseph A. & Grabowski, Henry G. & Hansen, Ronald W., 2016. "Innovation in the pharmaceutical industry: New estimates of R&D costs," Journal of Health Economics, Elsevier, vol. 47(C), pages 20-33.
    4. Christos H. Papadimitriou & John N. Tsitsiklis, 1999. "The Complexity of Optimal Queuing Network Control," Mathematics of Operations Research, INFORMS, vol. 24(2), pages 293-305, May.
    5. Steffen Ventz & Lorenzo Trippa, 2015. "Bayesian designs and the control of frequentist characteristics: A practical solution," Biometrics, The International Biometric Society, vol. 71(1), pages 218-226, March.
    6. Richard D. Smallwood & Edward J. Sondik, 1973. "The Optimal Control of Partially Observable Markov Processes over a Finite Horizon," Operations Research, INFORMS, vol. 21(5), pages 1071-1088, October.
    7. David B. Brown & James E. Smith, 2011. "Dynamic Portfolio Optimization with Transaction Costs: Heuristics and Dual Bounds," Management Science, INFORMS, vol. 57(10), pages 1752-1770, October.
    8. Guosheng Yin & Nan Chen & J. Jack Lee, 2012. "Phase II trial design with Bayesian adaptive randomization and predictive probability," Journal of the Royal Statistical Society Series C, Royal Statistical Society, vol. 61(2), pages 219-235, March.
    9. Felipe Caro & Jérémie Gallien, 2007. "Dynamic Assortment with Demand Learning for Seasonal Consumer Goods," Management Science, INFORMS, vol. 53(2), pages 276-292, February.
    10. Yossi Aviv & Amit Pazgal, 2005. "A Partially Observed Markov Decision Process for Dynamic Pricing," Management Science, INFORMS, vol. 51(9), pages 1400-1416, September.
    11. Dimitris Bertsimas & Adam J. Mersereau, 2007. "A Learning Approach for Interactive Marketing to a Customer Segment," Operations Research, INFORMS, vol. 55(6), pages 1120-1135, December.
    12. George E. Monahan, 1982. "State of the Art---A Survey of Partially Observable Markov Decision Processes: Theory, Models, and Algorithms," Management Science, INFORMS, vol. 28(1), pages 1-16, January.
    13. Ahuja, Vishal & Birge, John R., 2016. "Response-adaptive designs for clinical trials: Simultaneous learning from multiple patients," European Journal of Operational Research, Elsevier, vol. 248(2), pages 619-633.
    14. D. P. de Farias & B. Van Roy, 2003. "The Linear Programming Approach to Approximate Dynamic Programming," Operations Research, INFORMS, vol. 51(6), pages 850-865, December.
    15. Daniel Adelman & Adam J. Mersereau, 2008. "Relaxations of Weakly Coupled Stochastic Dynamic Programs," Operations Research, INFORMS, vol. 56(3), pages 712-727, June.
    16. Stephen Chick & Martin Forster & Paolo Pertile, 2017. "A Bayesian decision theoretic model of sequential experimentation with delayed response," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 79(5), pages 1439-1462, November.
    17. Michael N. Katehakis & Arthur F. Veinott, 1987. "The Multi-Armed Bandit Problem: Decomposition and Computation," Mathematics of Operations Research, INFORMS, vol. 12(2), pages 262-268, May.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.

    Cited by:

    1. Hossein Kamalzadeh & Vishal Ahuja & Michael Hahsler & Michael E. Bowen, 2021. "An Analytics‐Driven Approach for Optimal Individualized Diabetes Screening," Production and Operations Management, Production and Operations Management Society, vol. 30(9), pages 3161-3191, September.
    2. Satic, U. & Jacko, P. & Kirkbride, C., 2024. "A simulation-based approximate dynamic programming approach to dynamic and stochastic resource-constrained multi-project scheduling problem," European Journal of Operational Research, Elsevier, vol. 315(2), pages 454-469.
    3. Williamson, S. Faye & Jacko, Peter & Jaki, Thomas, 2022. "Generalisations of a Bayesian decision-theoretic randomisation procedure and the impact of delayed responses," Computational Statistics & Data Analysis, Elsevier, vol. 174(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one; a toy sketch of this kind of ranking follows the list below.
    1. Saghafian, Soroush, 2018. "Ambiguous partially observable Markov decision processes: Structural results and applications," Journal of Economic Theory, Elsevier, vol. 178(C), pages 1-35.
    2. Ahuja, Vishal & Birge, John R., 2016. "Response-adaptive designs for clinical trials: Simultaneous learning from multiple patients," European Journal of Operational Research, Elsevier, vol. 248(2), pages 619-633.
    3. Santiago R. Balseiro & David B. Brown & Chen Chen, 2021. "Dynamic Pricing of Relocating Resources in Large Networks," Management Science, INFORMS, vol. 67(7), pages 4075-4094, July.
    4. Chiel van Oosterom & Lisa M. Maillart & Jeffrey P. Kharoufeh, 2017. "Optimal maintenance policies for a safety‐critical system and its deteriorating sensor," Naval Research Logistics (NRL), John Wiley & Sons, vol. 64(5), pages 399-417, August.
    5. Stephen Chick & Martin Forster & Paolo Pertile, 2017. "A Bayesian decision theoretic model of sequential experimentation with delayed response," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 79(5), pages 1439-1462, November.
    6. Hao Zhang, 2010. "Partially Observable Markov Decision Processes: A Geometric Technique and Analysis," Operations Research, INFORMS, vol. 58(1), pages 214-228, February.
    7. Chernonog, Tatyana & Avinadav, Tal & Ben-Zvi, Tal, 2016. "A two-state partially observable Markov decision process with three actions," European Journal of Operational Research, Elsevier, vol. 254(3), pages 957-967.
    8. Santiago R. Balseiro & David B. Brown, 2019. "Approximations to Stochastic Dynamic Programs via Information Relaxation Duality," Operations Research, INFORMS, vol. 67(2), pages 577-597, March.
    9. Williams, Byron K., 2011. "Resolving structural uncertainty in natural resources management using POMDP approaches," Ecological Modelling, Elsevier, vol. 222(5), pages 1092-1102.
    10. David B. Brown & Martin B. Haugh, 2017. "Information Relaxation Bounds for Infinite Horizon Markov Decision Processes," Operations Research, INFORMS, vol. 65(5), pages 1355-1379, October.
    11. José Niño-Mora, 2023. "Markovian Restless Bandits and Index Policies: A Review," Mathematics, MDPI, vol. 11(7), pages 1-27, March.
    12. Stephen E. Chick & Noah Gans & Özge Yapar, 2022. "Bayesian Sequential Learning for Clinical Trials of Multiple Correlated Medical Interventions," Management Science, INFORMS, vol. 68(7), pages 4919-4938, July.
    13. Amir Ali Nasrollahzadeh & Amin Khademi, 2022. "Dynamic Programming for Response-Adaptive Dose-Finding Clinical Trials," INFORMS Journal on Computing, INFORMS, vol. 34(2), pages 1176-1190, March.
    14. Yossi Aviv & Amit Pazgal, 2005. "A Partially Observed Markov Decision Process for Dynamic Pricing," Management Science, INFORMS, vol. 51(9), pages 1400-1416, September.
    15. Andres Alban & Stephen E. Chick & Martin Forster, 2023. "Value-Based Clinical Trials: Selecting Recruitment Rates and Trial Lengths in Different Regulatory Contexts," Management Science, INFORMS, vol. 69(6), pages 3516-3535, June.
    16. Panos Kouvelis & Joseph Milner & Zhili Tian, 2017. "Clinical Trials for New Drug Development: Optimal Investment and Application," Manufacturing & Service Operations Management, INFORMS, vol. 19(3), pages 437-452, July.
    17. David B. Brown & James E. Smith, 2020. "Index Policies and Performance Bounds for Dynamic Selection Problems," Management Science, INFORMS, vol. 66(7), pages 3029-3050, July.
    18. Hao Zhang, 2022. "Analytical Solution to a Discrete-Time Model for Dynamic Learning and Decision Making," Management Science, INFORMS, vol. 68(8), pages 5924-5957, August.
    19. Abhijit Gosavi, 2009. "Reinforcement Learning: A Tutorial Survey and Recent Advances," INFORMS Journal on Computing, INFORMS, vol. 21(2), pages 178-192, May.
    20. Juri Hinz, 2021. "On Approximate Solutions for Partially Observable Decision Problems," Research Paper Series 421, Quantitative Finance Research Centre, University of Technology, Sydney.
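
Purely as an illustration of the ranking rule stated before the list (and not RePEc's actual algorithm), the sketch below scores a candidate item by bibliographic coupling (shared references) plus co-citation (shared citing items). All handles are hypothetical.

```python
# Toy relatedness score: count works cited by both items plus works that
# cite both items. Illustrative only; handles and data are made up.
def relatedness(item, candidate, references, citers):
    """`references` and `citers` map an item handle to a set of handles."""
    shared_refs = references.get(item, set()) & references.get(candidate, set())
    shared_citers = citers.get(item, set()) & citers.get(candidate, set())
    return len(shared_refs) + len(shared_citers)


# Hypothetical handles, purely for illustration.
references = {
    "this-item": {"lovejoy1991", "smallwood1973", "defarias2003"},
    "candidate-a": {"lovejoy1991", "smallwood1973"},
    "candidate-b": {"defarias2003"},
}
citers = {
    "this-item": {"kamalzadeh2021"},
    "candidate-a": {"kamalzadeh2021"},
    "candidate-b": set(),
}

ranked = sorted(["candidate-a", "candidate-b"],
                key=lambda c: relatedness("this-item", c, references, citers),
                reverse=True)
print(ranked)  # candidate-a shares two references and one citer, so it ranks first
```

Candidates that both draw on the same references and are cited alongside this item score highest, which matches the rule described above.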

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:inm:orijoc:v:32:y:4:i:2020:p:877-894. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Chris Asher (email available below). General contact details of provider: https://edirc.repec.org/data/inforea.html.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.