Printed from https://ideas.repec.org/a/spr/annopr/v340y2024i2d10.1007_s10479-024-06165-4.html

Percentile optimization in multi-armed bandit problems

Author

Listed:
  • Zahra Ghatrani

    (University of Washington)

  • Archis Ghate

    (University of Washington)

Abstract

A multi-armed bandit (MAB) problem is described as follows. At each time-step, a decision-maker selects one arm from a finite set. A reward is earned from this arm and the state of that arm evolves stochastically. The goal is to determine an arm-pulling policy that maximizes expected total discounted reward over an infinite horizon. We study MAB problems where the rewards are multivariate Gaussian, to account for data-driven estimation errors. We employ a percentile optimization approach, wherein the goal is to find an arm-pulling policy that maximizes the sum of percentiles of expected total discounted rewards earned from individual arms. The idea is motivated by recent work on percentile optimization in Markov decision processes. We demonstrate that, when applied to MABs, this yields an intractable second-order cone program (SOCP) whose size is exponential in the number of arms. We use Lagrangian relaxation to break the resulting curse-of-dimensionality. Specifically, we show that the relaxed problem can be reformulated as an SOCP with size linear in the number of arms. We propose three approaches to recover feasible arm-pulling decisions during run-time from an off-line optimal solution of this SOCP. Our numerical experiments suggest that one of these three methods is more effective than the other two.
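The percentile idea in the abstract can be illustrated with a toy sketch (this is not the paper's SOCP formulation): for a Gaussian reward estimate with mean mu and standard deviation sigma, the delta-percentile is mu + Phi^{-1}(delta) * sigma, so for delta < 0.5 an arm is penalized in proportion to its estimation uncertainty. The symbols mu, sigma, and delta below are hypothetical stand-ins for the paper's data-driven estimates.

```python
# Toy illustration of percentile-based arm selection for Gaussian reward
# estimates. Not the paper's method: the paper optimizes a sum of percentiles
# of discounted rewards via an SOCP; here we just compare per-arm percentiles.
from statistics import NormalDist

def percentile_value(mu, sigma, delta):
    """delta-percentile of a N(mu, sigma^2) reward: mu + Phi^{-1}(delta) * sigma."""
    return mu + NormalDist().inv_cdf(delta) * sigma

def pick_arm(means, stdevs, delta=0.1):
    """Index of the arm whose delta-percentile reward is largest.
    For delta < 0.5 this is risk-averse: high-variance arms are penalized."""
    scores = [percentile_value(m, s, delta) for m, s in zip(means, stdevs)]
    return max(range(len(scores)), key=scores.__getitem__)

# Arm 0 has the higher mean but far more estimation uncertainty; at the
# 10th percentile the safer arm 1 is preferred.
print(pick_arm(means=[1.0, 0.8], stdevs=[1.0, 0.1], delta=0.1))  # → 1
```

With delta = 0.9 the comparison becomes optimistic and arm 0 would be chosen instead, which shows how the percentile level trades off reward against estimation risk.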

Suggested Citation

  • Zahra Ghatrani & Archis Ghate, 2024. "Percentile optimization in multi-armed bandit problems," Annals of Operations Research, Springer, vol. 340(2), pages 837-862, September.
  • Handle: RePEc:spr:annopr:v:340:y:2024:i:2:d:10.1007_s10479-024-06165-4
    DOI: 10.1007/s10479-024-06165-4

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s10479-024-06165-4
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1007/s10479-024-06165-4?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Erick Delage & Shie Mannor, 2010. "Percentile Optimization for Markov Decision Processes with Parameter Uncertainty," Operations Research, INFORMS, vol. 58(1), pages 203-213, February.
    2. Daniel Adelman & Adam J. Mersereau, 2008. "Relaxations of Weakly Coupled Stochastic Dynamic Programs," Operations Research, INFORMS, vol. 56(3), pages 712-727, June.
    3. Garud N. Iyengar, 2005. "Robust Dynamic Programming," Mathematics of Operations Research, INFORMS, vol. 30(2), pages 257-280, May.
    4. Arnab Nilim & Laurent El Ghaoui, 2005. "Robust Control of Markov Decision Processes with Uncertain Transition Matrices," Operations Research, INFORMS, vol. 53(5), pages 780-798, October.
    5. A. Charnes & W. W. Cooper, 1959. "Chance-Constrained Programming," Management Science, INFORMS, vol. 6(1), pages 73-79, October.
    6. Shie Mannor & Duncan Simester & Peng Sun & John N. Tsitsiklis, 2007. "Bias and Variance Approximation in Value Function Estimates," Management Science, INFORMS, vol. 53(2), pages 308-322, February.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Bren, Austin & Saghafian, Soroush, 2018. "Data-Driven Percentile Optimization for Multi-Class Queueing Systems with Model Ambiguity: Theory and Application," Working Paper Series rwp18-008, Harvard University, John F. Kennedy School of Government.
    2. V Varagapriya & Vikas Vikram Singh & Abdel Lisser, 2023. "Joint chance-constrained Markov decision processes," Annals of Operations Research, Springer, vol. 322(2), pages 1013-1035, March.
    3. Shie Mannor & Ofir Mebel & Huan Xu, 2016. "Robust MDPs with k -Rectangular Uncertainty," Mathematics of Operations Research, INFORMS, vol. 41(4), pages 1484-1509, November.
    4. David L. Kaufman & Andrew J. Schaefer, 2013. "Robust Modified Policy Iteration," INFORMS Journal on Computing, INFORMS, vol. 25(3), pages 396-410, August.
    5. Maximilian Blesch & Philipp Eisenhauer, 2023. "Robust Decision-Making under Risk and Ambiguity," Rationality and Competition Discussion Paper Series 463, CRC TRR 190 Rationality and Competition.
    6. Wolfram Wiesemann & Daniel Kuhn & Berç Rustem, 2013. "Robust Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 38(1), pages 153-183, February.
    7. Maximilian Blesch & Philipp Eisenhauer, 2021. "Robust Decision-Making Under Risk and Ambiguity," ECONtribute Discussion Papers Series 104, University of Bonn and University of Cologne, Germany.
    8. Varagapriya, V & Singh, Vikas Vikram & Lisser, Abdel, 2024. "Rank-1 transition uncertainties in constrained Markov decision processes," European Journal of Operational Research, Elsevier, vol. 318(1), pages 167-178.
    9. Huan Xu & Shie Mannor, 2012. "Distributionally Robust Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 37(2), pages 288-300, May.
    10. Saghafian, Soroush, 2018. "Ambiguous partially observable Markov decision processes: Structural results and applications," Journal of Economic Theory, Elsevier, vol. 178(C), pages 1-35.
    11. Erick Delage & Shie Mannor, 2010. "Percentile Optimization for Markov Decision Processes with Parameter Uncertainty," Operations Research, INFORMS, vol. 58(1), pages 203-213, February.
    12. Zhu, Zhicheng & Xiang, Yisha & Zhao, Ming & Shi, Yue, 2023. "Data-driven remanufacturing planning with parameter uncertainty," European Journal of Operational Research, Elsevier, vol. 309(1), pages 102-116.
    13. Alireza Boloori & Soroush Saghafian & Harini A. Chakkera & Curtiss B. Cook, 2020. "Data-Driven Management of Post-transplant Medications: An Ambiguous Partially Observable Markov Decision Process Approach," Manufacturing & Service Operations Management, INFORMS, vol. 22(5), pages 1066-1087, September.
    14. Felipe Caro & Aparupa Das Gupta, 2022. "Robust control of the multi-armed bandit problem," Annals of Operations Research, Springer, vol. 317(2), pages 461-480, October.
    15. Shiau Hong Lim & Huan Xu & Shie Mannor, 2016. "Reinforcement Learning in Robust Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 41(4), pages 1325-1353, November.
    16. Zeynep Turgay & Fikri Karaesmen & Egemen Lerzan Örmeci, 2018. "Structural properties of a class of robust inventory and queueing control problems," Naval Research Logistics (NRL), John Wiley & Sons, vol. 65(8), pages 699-716, December.
    17. Boloori, Alireza & Saghafian, Soroush & Chakkera, Harini A. A. & Cook, Curtiss B., 2017. "Data-Driven Management of Post-transplant Medications: An APOMDP Approach," Working Paper Series rwp17-036, Harvard University, John F. Kennedy School of Government.
    18. Li Xia, 2020. "Risk‐Sensitive Markov Decision Processes with Combined Metrics of Mean and Variance," Production and Operations Management, Production and Operations Management Society, vol. 29(12), pages 2808-2827, December.
    19. Maximilian Blesch & Philipp Eisenhauer, 2021. "Robust decision-making under risk and ambiguity," Papers 2104.12573, arXiv.org, revised Oct 2021.
    20. Dan A. Iancu & Marek Petrik & Dharmashankar Subramanian, 2015. "Tight Approximations of Dynamic Risk Measures," Mathematics of Operations Research, INFORMS, vol. 40(3), pages 655-682, March.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:annopr:v:340:y:2024:i:2:d:10.1007_s10479-024-06165-4. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.