
Best-of-Both-Worlds Linear Contextual Bandits

Authors

  • Masahiro Kato
  • Shinji Ito

Abstract

This study investigates the problem of $K$-armed linear contextual bandits, an instance of the multi-armed bandit problem, under adversarial corruption. At each round, a decision-maker observes an independent and identically distributed context and then selects an arm based on the context and past observations. After selecting an arm, the decision-maker incurs the loss associated with that arm, and aims to minimize the cumulative loss over $T$ rounds. The goal of this study is to develop a strategy that is effective in both stochastic and adversarial environments, with theoretical guarantees. We first formalize the problem by introducing a novel setting of bandits with adversarial corruption, referred to as the contextual adversarial regime with a self-bounding constraint, and we assume linear models for the relationship between the loss and the context. We then propose a strategy that combines the RealLinExp3 algorithm of Neu & Olkhovskaya (2020) with Follow-The-Regularized-Leader (FTRL). The regret of the proposed algorithm is upper-bounded by $O\left(\min\left\{\frac{(\log(T))^3}{\Delta_{*}} + \sqrt{\frac{C(\log(T))^3}{\Delta_{*}}},\ \ \sqrt{T}(\log(T))^2\right\}\right)$, where $T \in \mathbb{N}$ is the number of rounds, $\Delta_{*} > 0$ is a constant lower bound on the gap between the best and suboptimal arms, holding for any context, and $C \in [0, T]$ is an adversarial corruption parameter. This regret upper bound implies $O\left(\frac{(\log(T))^3}{\Delta_{*}}\right)$ regret in a stochastic environment and $O\left(\sqrt{T}(\log(T))^2\right)$ regret in an adversarial environment. We refer to our strategy as Best-of-Both-Worlds (BoBW) RealFTRL, owing to its theoretical guarantees in both stochastic and adversarial regimes.
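
To make the protocol concrete, the following is a minimal Python sketch of the interaction loop described above, paired with a toy exponential-weights policy (FTRL with a negative-entropy regularizer reduces to a softmax over cumulative estimated losses). It is an illustration, not the authors' BoBW RealFTRL: for simplicity the context covariance $E[xx^\top]$ is assumed known, whereas RealLinExp3 estimates its inverse (via Matrix Geometric Resampling), and the learning-rate schedule, exploration rate, and problem sizes below are illustrative assumptions.

    import numpy as np

    # Toy simulation of K-armed linear contextual bandits (stochastic regime).
    # NOT the paper's algorithm: loss estimation assumes a known context
    # covariance, and eta follows a generic sqrt(log K / t) schedule rather
    # than the entropy-adaptive rate used by BoBW RealFTRL.
    rng = np.random.default_rng(0)
    K, d, T = 5, 3, 5000               # arms, context dimension, rounds (assumed)
    gamma = 0.05                        # uniform exploration rate (assumed)
    theta = rng.normal(size=(K, d))     # true per-arm loss parameters
    Sigma_inv = d * np.eye(d)           # inverse of E[x x^T] = I/d (assumed known)

    Theta_sum = np.zeros((K, d))        # cumulative per-arm loss-parameter estimates
    total_loss = total_best = 0.0

    for t in range(1, T + 1):
        x = rng.normal(size=d) / np.sqrt(d)   # i.i.d. context with E[x x^T] = I/d
        eta = np.sqrt(np.log(K) / t)          # decaying FTRL learning rate
        z = -eta * (Theta_sum @ x)            # negated cumulative estimated losses
        w = np.exp(z - z.max())               # stabilized softmax weights
        p = (1 - gamma) * w / w.sum() + gamma / K   # mix in uniform exploration

        a = rng.choice(K, p=p)                      # sample an arm
        loss = theta[a] @ x + 0.1 * rng.normal()    # incur noisy linear loss

        # Importance-weighted, covariance-corrected estimate of theta_a; it is
        # unbiased since E[(1{a_t=a}/p_a) * Sigma^{-1} x x^T theta_a] = theta_a.
        Theta_sum[a] += (loss / p[a]) * (Sigma_inv @ x)

        total_loss += loss
        total_best += (theta @ x).min()       # loss of the per-context best arm

    print(f"empirical regret after T={T} rounds: {total_loss - total_best:.1f}")

In the stochastic regime simulated here, one would expect the empirical regret to grow slowly with $T$, in the spirit of the polylogarithmic side of the bound above; replacing the fixed theta with adversarially chosen losses would instead exercise the $\sqrt{T}(\log(T))^2$ side of the minimum.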

Suggested Citation

  • Masahiro Kato & Shinji Ito, 2023. "Best-of-Both-Worlds Linear Contextual Bandits," Papers 2312.16489, arXiv.org.
  • Handle: RePEc:arx:papers:2312.16489

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2312.16489
    File Function: Latest version
    Download Restriction: no


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Rong Jin & David Simchi-Levi & Li Wang & Xinshang Wang & Sen Yang, 2021. "Shrinking the Upper Confidence Bound: A Dynamic Product Selection Problem for Urban Warehouses," Management Science, INFORMS, vol. 67(8), pages 4756-4771, August.
    2. Yining Wang & Boxiao Chen & David Simchi-Levi, 2021. "Multimodal Dynamic Pricing," Management Science, INFORMS, vol. 67(10), pages 6136-6152, October.
    3. Ying Zhong & L. Jeff Hong & Guangwu Liu, 2021. "Earning and Learning with Varying Cost," Production and Operations Management, Production and Operations Management Society, vol. 30(8), pages 2379-2394, August.
    4. David Simchi-Levi & Rui Sun & Huanan Zhang, 2022. "Online Learning and Optimization for Revenue Management Problems with Add-on Discounts," Management Science, INFORMS, vol. 68(10), pages 7402-7421, October.
    5. Hamsa Bastani & David Simchi-Levi & Ruihao Zhu, 2022. "Meta Dynamic Pricing: Transfer Learning Across Experiments," Management Science, INFORMS, vol. 68(3), pages 1865-1881, March.
    6. Mark Egan & Tomas Philipson, 2016. "Health Care Adherence and Personalized Medicine," Working Papers 2016-H01, Becker Friedman Institute for Research In Economics.
    7. Agrawal, Priyank & Tulabandhula, Theja & Avadhanula, Vashist, 2023. "A tractable online learning algorithm for the multinomial logit contextual bandit," European Journal of Operational Research, Elsevier, vol. 310(2), pages 737-750.
    8. Haihui Shen & L. Jeff Hong & Xiaowei Zhang, 2021. "Ranking and Selection with Covariates for Personalized Decision Making," INFORMS Journal on Computing, INFORMS, vol. 33(4), pages 1500-1519, October.
    9. Mark Egan & Tomas J. Philipson, 2014. "Health Care Adherence and Personalized Medicine," NBER Working Papers 20330, National Bureau of Economic Research, Inc.
    10. Kimia Keshanian & Daniel Zantedeschi & Kaushik Dutta, 2022. "Features Selection as a Nash-Bargaining Solution: Applications in Online Advertising and Information Systems," INFORMS Journal on Computing, INFORMS, vol. 34(5), pages 2485-2501, September.
    11. David B. Brown & James E. Smith, 2013. "Optimal Sequential Exploration: Bandits, Clairvoyants, and Wildcats," Operations Research, INFORMS, vol. 61(3), pages 644-665, June.
    12. Yuqing Zhang & Neil Walton, 2019. "Adaptive Pricing in Insurance: Generalized Linear Models and Gaussian Process Regression Approaches," Papers 1907.05381, arXiv.org.
    13. Xi Chen & Quanquan Liu & Yining Wang, 2023. "Active Learning for Contextual Search with Binary Feedback," Management Science, INFORMS, vol. 69(4), pages 2165-2181, April.
    14. Shipra Agrawal & Vashist Avadhanula & Vineet Goyal & Assaf Zeevi, 2019. "MNL-Bandit: A Dynamic Learning Approach to Assortment Selection," Operations Research, INFORMS, vol. 67(5), pages 1453-1485, September.
    15. Long He & Sheng Liu & Zuo‐Jun Max Shen, 2022. "Smart urban transport and logistics: A business analytics perspective," Production and Operations Management, Production and Operations Management Society, vol. 31(10), pages 3771-3787, October.
    16. Pourbabaee, Farzad, 2021. "High dimensional decision making, upper and lower bounds," Economics Letters, Elsevier, vol. 204(C).
    17. Claudio Cardoso Flores & Marcelo Cunha Medeiros, 2020. "Online Action Learning in High Dimensions: A Conservative Perspective," Papers 2009.13961, arXiv.org, revised Mar 2024.
    18. Bart Van Parys & Negin Golrezaei, 2024. "Optimal Learning for Structured Bandits," Management Science, INFORMS, vol. 70(6), pages 3951-3998, June.
    19. Daniel Russo & Benjamin Van Roy, 2014. "Learning to Optimize via Posterior Sampling," Mathematics of Operations Research, INFORMS, vol. 39(4), pages 1221-1243, November.
    20. N. Bora Keskin & Assaf Zeevi, 2017. "Chasing Demand: Learning and Earning in a Changing Environment," Mathematics of Operations Research, INFORMS, vol. 42(2), pages 277-307, May.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2312.16489. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact the arXiv administrators. General contact details of provider: http://arxiv.org/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.