
Optimal Learning for Structured Bandits

Author

Listed:
  • Bart Van Parys

    (Massachusetts Institute of Technology, Sloan School of Management, Cambridge, Massachusetts 02142)

  • Negin Golrezaei

    (Massachusetts Institute of Technology, Sloan School of Management, Cambridge, Massachusetts 02142)

Abstract

We study structured multiarmed bandits, the problem of online decision-making under uncertainty in the presence of structural information. In this problem, the decision-maker must discover the best course of action despite observing only uncertain rewards over time. The decision-maker is aware of certain convex structural information regarding the reward distributions; that is, the decision-maker knows that the reward distributions of the arms belong to a convex compact set. In the presence of such structural information, the decision-maker would like to minimize regret by exploiting this information, where regret is the performance difference against a benchmark policy that knows the best action ahead of time. In the absence of structural information, the classical upper confidence bound (UCB) and Thompson sampling algorithms are well known to incur minimal regret. However, as recently pointed out by Russo and Van Roy (2018) and Lattimore and Szepesvari (2017), neither algorithm is capable of exploiting structural information that is commonly available in practice. We propose a novel learning algorithm that we call “DUSA,” whose regret matches the information-theoretic regret lower bound up to a constant factor and which can handle a wide range of structural information. Our algorithm DUSA solves a dual counterpart of the regret lower bound at the empirical reward distribution and follows its suggested play. We show that this idea leads to the first computationally viable learning policy with asymptotically minimal regret for various forms of structural information, including well-known structured bandits such as linear, Lipschitz, and convex bandits, as well as novel structured bandits that have not been studied in the literature for lack of a unified and flexible framework.
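
The abstract's description of DUSA suggests the following high-level loop: maintain an empirical estimate of the reward distributions, solve an optimization problem derived from the regret lower bound at that estimate, and play the arm that the solution suggests. The Python sketch below illustrates this generic pattern only; solve_lower_bound_dual and the arm-selection rule are hypothetical placeholders, and the actual convex program and tracking rule used by DUSA are specified in the paper.

    # Schematic sketch of the high-level loop described in the abstract; this is NOT
    # the paper's implementation. `pull(arm)` returns an observed reward, and
    # `solve_lower_bound_dual` is a hypothetical placeholder for the convex program
    # (the dual counterpart of the regret lower bound) solved at the empirical estimate;
    # it is assumed to return a dict of play weights per arm.
    import numpy as np

    def structured_bandit_loop(arms, pull, solve_lower_bound_dual, horizon):
        rewards = {a: [] for a in arms}
        for a in arms:                         # sample every arm once to initialize
            rewards[a].append(pull(a))
        for t in range(len(arms), horizon):
            empirical = {a: np.mean(rewards[a]) for a in arms}   # empirical reward estimates
            allocation = solve_lower_bound_dual(empirical)       # prescribed play allocation
            counts = {a: len(rewards[a]) for a in arms}
            # Follow the suggested play by pulling the arm most under-sampled relative
            # to the prescribed allocation (a common tracking device in asymptotically
            # optimal bandit algorithms, not necessarily the rule used in the paper).
            arm = max(arms, key=lambda a: allocation[a] * (t + 1) - counts[a])
            rewards[arm].append(pull(arm))
        return rewards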

Suggested Citation

  • Bart Van Parys & Negin Golrezaei, 2024. "Optimal Learning for Structured Bandits," Management Science, INFORMS, vol. 70(6), pages 3951-3998, June.
  • Handle: RePEc:inm:ormnsc:v:70:y:2024:i:6:p:3951-3998
    DOI: 10.1287/mnsc.2020.02108

    Download full text from publisher

    File URL: http://dx.doi.org/10.1287/mnsc.2020.02108
    Download Restriction: no

    File URL: https://libkey.io/10.1287/mnsc.2020.02108?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. Daniel Russo & Benjamin Van Roy, 2018. "Learning to Optimize via Information-Directed Sampling," Operations Research, INFORMS, vol. 66(1), pages 230-252, January.
    2. Paat Rusmevichientong & John N. Tsitsiklis, 2010. "Linearly Parameterized Bandits," Mathematics of Operations Research, INFORMS, vol. 35(2), pages 395-411, May.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. David Simchi-Levi & Rui Sun & Huanan Zhang, 2022. "Online Learning and Optimization for Revenue Management Problems with Add-on Discounts," Management Science, INFORMS, vol. 68(10), pages 7402-7421, October.
    2. Mark Egan & Tomas Philipson, 2016. "Health Care Adherence and Personalized Medicine," Working Papers 2016-H01, Becker Friedman Institute for Research In Economics.
    3. Rong Jin & David Simchi-Levi & Li Wang & Xinshang Wang & Sen Yang, 2021. "Shrinking the Upper Confidence Bound: A Dynamic Product Selection Problem for Urban Warehouses," Management Science, INFORMS, vol. 67(8), pages 4756-4771, August.
    4. David B. Brown & James E. Smith, 2013. "Optimal Sequential Exploration: Bandits, Clairvoyants, and Wildcats," Operations Research, INFORMS, vol. 61(3), pages 644-665, June.
    5. Yuqing Zhang & Neil Walton, 2019. "Adaptive Pricing in Insurance: Generalized Linear Models and Gaussian Process Regression Approaches," Papers 1907.05381, arXiv.org.
    6. Shipra Agrawal & Vashist Avadhanula & Vineet Goyal & Assaf Zeevi, 2019. "MNL-Bandit: A Dynamic Learning Approach to Assortment Selection," Operations Research, INFORMS, vol. 67(5), pages 1453-1485, September.
    7. Daniel Russo & Benjamin Van Roy, 2014. "Learning to Optimize via Posterior Sampling," Mathematics of Operations Research, INFORMS, vol. 39(4), pages 1221-1243, November.
    8. Wang Chi Cheung & David Simchi-Levi & He Wang, 2017. "Technical Note—Dynamic Pricing and Demand Learning with Limited Price Experimentation," Operations Research, INFORMS, vol. 65(6), pages 1722-1731, December.
    9. Yining Wang & Xi Chen & Xiangyu Chang & Dongdong Ge, 2021. "Uncertainty Quantification for Demand Prediction in Contextual Dynamic Pricing," Production and Operations Management, Production and Operations Management Society, vol. 30(6), pages 1703-1717, June.
    10. Arnoud V. den Boer & N. Bora Keskin, 2020. "Discontinuous Demand Functions: Estimation and Pricing," Management Science, INFORMS, vol. 66(10), pages 4516-4534, October.
    11. Jeanine Miklós-Thal & Michael Raith & Matthew Selove, 2018. "What Are We Really Good At? Product Strategy with Uncertain Capabilities," Marketing Science, INFORMS, vol. 37(2), pages 294-309, March.
    12. Siddhartha Banerjee & Sujay Sanghavi & Sanjay Shakkottai, 2016. "Online Collaborative Filtering on Graphs," Operations Research, INFORMS, vol. 64(3), pages 756-769, June.
    13. Arnoud V. den Boer, 2014. "Dynamic Pricing with Multiple Products and Partially Specified Demand Distribution," Mathematics of Operations Research, INFORMS, vol. 39(3), pages 863-888, August.
    14. Bin Han & Ilya O. Ryzhov & Boris Defourny, 2016. "Optimal Learning in Linear Regression with Combinatorial Feature Selection," INFORMS Journal on Computing, INFORMS, vol. 28(4), pages 721-735, November.
    15. Yi Xiong & Ningyuan Chen & Xuefeng Gao & Xiang Zhou, 2022. "Sublinear regret for learning POMDPs," Production and Operations Management, Production and Operations Management Society, vol. 31(9), pages 3491-3504, September.
    16. Alper Atamtürk & Andrés Gómez, 2017. "Maximizing a Class of Utility Functions Over the Vertices of a Polytope," Operations Research, INFORMS, vol. 65(2), pages 433-445, March-Apr.
    17. Wang Chi Cheung & David Simchi-Levi & Ruihao Zhu, 2022. "Hedging the Drift: Learning to Optimize Under Nonstationarity," Management Science, INFORMS, vol. 68(3), pages 1696-1713, March.
    18. Hamsa Bastani & Mohsen Bayati, 2020. "Online Decision Making with High-Dimensional Covariates," Operations Research, INFORMS, vol. 68(1), pages 276-294, January.
    19. Chao Qin & Daniel Russo, 2024. "Optimizing Adaptive Experiments: A Unified Approach to Regret Minimization and Best-Arm Identification," Papers 2402.10592, arXiv.org, revised Jul 2024.
    20. Hamsa Bastani & David Simchi-Levi & Ruihao Zhu, 2022. "Meta Dynamic Pricing: Transfer Learning Across Experiments," Management Science, INFORMS, vol. 68(3), pages 1865-1881, March.
