Printed from https://ideas.repec.org/a/eee/ejores/v320y2025i3p616-627.html

Simple fixes that accommodate switching costs in multi-armed bandits

Author

Listed:
  • Teymourian, Ehsan
  • Yang, Jian

Abstract

When switching costs are added to the multi-armed bandit (MAB) problem, in which the arms' random reward distributions are initially unknown, techniques quite different from those for the pure MAB are usually required. We find that two simple fixes to the existing upper-confidence-bound (UCB) policy work well for the MAB with switching costs (MAB-SC). Two cases should be distinguished. One is positive-gap ambiguity, where the performance gap between the leading and lagging arms is known to be at least some δ > 0. For this case, our fix is to erect barriers that discourage frivolous arm switches. The other is zero-gap ambiguity, where absolutely nothing is known. We remedy this by forcing the same arms to be pulled over increasingly prolonged intervals. As usual, the effectiveness of our fixes is measured by the worst-case average regret over long time horizons T. When the barriers are fixed at δ/2, we can accomplish a ln(T)-sized regret bound for the positive-gap case. When the intervals are such that n of them occupy n² periods, we can achieve the best possible T^(1/2)-sized regret bound for the zero-gap case. Besides UCB, these fixes can be applied to a learning-while-doing (LWD) heuristic with satisfactory results as well. While not yet equipped with the best theoretical guarantees, the LWD-based policies have empirically outperformed those based on UCB and other known alternatives. Numerically competitive policies also include those resulting from interval-based fixes on Thompson sampling (TS).
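The interval-based fix for the zero-gap case can be illustrated with a short sketch: a standard UCB1 index policy in which the k-th chosen arm is held for 2k − 1 consecutive periods, so the first n intervals occupy exactly n² periods and the number of switches over a horizon T grows only like T^(1/2). The function name, the Bernoulli reward setup, and details such as tie-breaking and the confidence radius are illustrative assumptions, not the authors' exact algorithm.

```python
import math
import random

def interval_ucb(reward_fns, horizon, switch_cost=1.0, seed=0):
    """UCB1 with the chosen arm held for increasingly long intervals.

    The k-th interval lasts 2k - 1 periods, so the first n intervals
    occupy exactly n^2 periods, mirroring the interval schedule in the
    abstract.  Returns total reward net of switching costs and the
    number of arm switches.  This is a sketch, not the paper's policy.
    """
    rng = random.Random(seed)
    n_arms = len(reward_fns)
    counts = [0] * n_arms        # pulls of each arm so far
    means = [0.0] * n_arms       # empirical mean reward of each arm
    total_reward, switches, last_arm = 0.0, 0, None
    t, k = 0, 0                  # elapsed periods, interval index
    while t < horizon:
        k += 1
        # Pick an arm once per interval: play each arm's first interval,
        # then maximize the UCB index  mean_i + sqrt(2 ln t / n_i).
        if k <= n_arms:
            arm = k - 1
        else:
            arm = max(range(n_arms),
                      key=lambda i: means[i] + math.sqrt(2 * math.log(t) / counts[i]))
        if last_arm is not None and arm != last_arm:
            switches += 1
            total_reward -= switch_cost   # pay the switching cost once per switch
        last_arm = arm
        # Commit to this arm for the whole interval of length 2k - 1.
        for _ in range(min(2 * k - 1, horizon - t)):
            r = reward_fns[arm](rng)
            counts[arm] += 1
            means[arm] += (r - means[arm]) / counts[arm]
            total_reward += r
            t += 1
    return total_reward, switches
```

Because a switch can occur at most once per interval, roughly T^(1/2) intervals cap the total switching cost at the same order as the exploration regret, which is the point of the fix.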

Suggested Citation

  • Teymourian, Ehsan & Yang, Jian, 2025. "Simple fixes that accommodate switching costs in multi-armed bandits," European Journal of Operational Research, Elsevier, vol. 320(3), pages 616-627.
  • Handle: RePEc:eee:ejores:v:320:y:2025:i:3:p:616-627
    DOI: 10.1016/j.ejor.2024.09.017

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0377221724007203
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.ejor.2024.09.017?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Malekipirbazari, Milad & Çavuş, Özlem, 2024. "Index policy for multiarmed bandit problem with dynamic risk measures," European Journal of Operational Research, Elsevier, vol. 312(2), pages 627-640.
    2. Banks, Jeffrey S & Sundaram, Rangarajan K, 1994. "Switching Costs and the Gittins Index," Econometrica, Econometric Society, vol. 62(3), pages 687-694, May.
    3. Xu, Jianyu & Chen, Lujie & Tang, Ou, 2021. "An online algorithm for the risk-aware restless bandit," European Journal of Operational Research, Elsevier, vol. 290(2), pages 622-639.
    4. Ali Yekkehkhany & Ebrahim Arian & Rakesh Nagi & Ilan Shomorony, 2021. "A cost-based analysis for risk-averse explore-then-commit finite-time bandits," IISE Transactions, Taylor & Francis Journals, vol. 53(10), pages 1094-1108, October.
    5. Brezzi, Monica & Lai, Tze Leung, 2002. "Optimal learning and experimentation in bandit problems," Journal of Economic Dynamics and Control, Elsevier, vol. 27(1), pages 87-108, November.
    6. Daniel Russo & Benjamin Van Roy, 2014. "Learning to Optimize via Posterior Sampling," Mathematics of Operations Research, INFORMS, vol. 39(4), pages 1221-1243, November.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Malekipirbazari, Milad, 2025. "Optimizing sequential decision-making under risk: Strategic allocation with switching penalties," European Journal of Operational Research, Elsevier, vol. 321(1), pages 160-176.
    2. José Niño-Mora, 2023. "Markovian Restless Bandits and Index Policies: A Review," Mathematics, MDPI, vol. 11(7), pages 1-27, March.
    3. Michael Jong Kim, 2020. "Variance Regularization in Sequential Bayesian Optimization," Mathematics of Operations Research, INFORMS, vol. 45(3), pages 966-992, August.
    4. Eric M. Schwartz & Eric T. Bradlow & Peter S. Fader, 2017. "Customer Acquisition via Display Advertising Using Multi-Armed Bandit Experiments," Marketing Science, INFORMS, vol. 36(4), pages 500-522, July.
    5. Noah Gans & George Knox & Rachel Croson, 2007. "Simple Models of Discrete Choice and Their Performance in Bandit Experiments," Manufacturing & Service Operations Management, INFORMS, vol. 9(4), pages 383-408, December.
    6. David Simchi-Levi & Rui Sun & Huanan Zhang, 2022. "Online Learning and Optimization for Revenue Management Problems with Add-on Discounts," Management Science, INFORMS, vol. 68(10), pages 7402-7421, October.
    7. Hamsa Bastani & David Simchi-Levi & Ruihao Zhu, 2022. "Meta Dynamic Pricing: Transfer Learning Across Experiments," Management Science, INFORMS, vol. 68(3), pages 1865-1881, March.
    8. Zhengyuan Zhou & Susan Athey & Stefan Wager, 2023. "Offline Multi-Action Policy Learning: Generalization and Optimization," Operations Research, INFORMS, vol. 71(1), pages 148-183, January.
    9. Ruohan Zhan & Zhimei Ren & Susan Athey & Zhengyuan Zhou, 2024. "Policy Learning with Adaptively Collected Data," Management Science, INFORMS, vol. 70(8), pages 5270-5297, August.
    10. Konon, Alexander, 2016. "Career choice under uncertainty," VfS Annual Conference 2016 (Augsburg): Demographic Change 145583, Verein für Socialpolitik / German Economic Association.
    11. Rong Jin & David Simchi-Levi & Li Wang & Xinshang Wang & Sen Yang, 2021. "Shrinking the Upper Confidence Bound: A Dynamic Product Selection Problem for Urban Warehouses," Management Science, INFORMS, vol. 67(8), pages 4756-4771, August.
    12. Theodore Papageorgiou, 2022. "Occupational Matching and Cities," American Economic Journal: Macroeconomics, American Economic Association, vol. 14(3), pages 82-132, July.
    13. José Niño-Mora, 2020. "Fast Two-Stage Computation of an Index Policy for Multi-Armed Bandits with Setup Delays," Mathematics, MDPI, vol. 9(1), pages 1-36, December.
    14. Raluca M. Ursu & Qingliang Wang & Pradeep K. Chintagunta, 2020. "Search Duration," Marketing Science, INFORMS, vol. 39(5), pages 849-871, September.
    15. Victor F. Araman & René A. Caldentey, 2022. "Diffusion Approximations for a Class of Sequential Experimentation Problems," Management Science, INFORMS, vol. 68(8), pages 5958-5979, August.
    16. Xiangyu Gao & Stefanus Jasin & Sajjad Najafi & Huanan Zhang, 2022. "Joint Learning and Optimization for Multi-Product Pricing (and Ranking) Under a General Cascade Click Model," Management Science, INFORMS, vol. 68(10), pages 7362-7382, October.
    17. Agrawal, Priyank & Tulabandhula, Theja & Avadhanula, Vashist, 2023. "A tractable online learning algorithm for the multinomial logit contextual bandit," European Journal of Operational Research, Elsevier, vol. 310(2), pages 737-750.
    18. John R. Hauser & Guilherme (Gui) Liberali & Glen L. Urban, 2014. "Website Morphing 2.0: Switching Costs, Partial Exposure, Random Exit, and When to Morph," Management Science, INFORMS, vol. 60(6), pages 1594-1616, June.
    19. Mengying Zhu & Xiaolin Zheng & Yan Wang & Yuyuan Li & Qianqiao Liang, 2019. "Adaptive Portfolio by Solving Multi-armed Bandit via Thompson Sampling," Papers 1911.05309, arXiv.org, revised Nov 2019.
    20. Stephen E. Chick & Noah Gans, 2009. "Economic Analysis of Simulation Selection Problems," Management Science, INFORMS, vol. 55(3), pages 421-437, March.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:ejores:v:320:y:2025:i:3:p:616-627. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/locate/eor.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.