IDEAS home Printed from https://ideas.repec.org/a/spr/mathme/v100y2024i2d10.1007_s00186-024-00868-x.html

Low-complexity algorithm for restless bandits with imperfect observations

Author

Listed:
  • Keqin Liu

    (Xi’an Jiaotong-Liverpool University
    National Center for Applied Mathematics)

  • Richard Weber

    (University of Cambridge)

  • Chengzhong Zhang

    (National Center for Applied Mathematics)

Abstract

We consider a class of restless bandit problems with broad applications in reinforcement learning and stochastic optimization. We consider N independent discrete-time Markov processes, each of which has two possible states: 1 and 0 ('good' and 'bad'). Only if a process is both in state 1 and observed to be so does reward accrue. The aim is to maximize the expected discounted sum of returns over the infinite horizon, subject to the constraint that only M $$(M < N)$$ processes may be observed at each time step.
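The model in the abstract is a two-state partially observable Markov decision process per arm: the decision maker tracks, for each process, a belief that it is in state 1, updates it by Bayes' rule when the process is observed, and propagates it through the Markov transition law otherwise. The sketch below is a minimal illustration of that belief update under assumed, illustrative parameters (transition probabilities `p11`, `p01` and a hypothetical false-negative rate `miss` standing in for the imperfect observation); it is not the authors' algorithm.

```python
def belief_update(omega, p11, p01, observed, obs=None, miss=0.0):
    """One-step update of omega = P(process in state 1).

    p11 / p01: probability of moving to state 1 from state 1 / 0.
    miss: hypothetical P(observe 0 | true state is 1), modeling the
    imperfect observation; miss = 0.0 recovers perfect observation.
    """
    if observed and obs == 1:
        # State 1 confirmed (reward accrues in this case).
        post = 1.0
    elif observed and obs == 0:
        # Either state 1 was missed (prob. miss) or the state is truly 0.
        post = omega * miss / (omega * miss + (1.0 - omega))
    else:
        # Unobserved arm: no new information this step.
        post = omega
    # Propagate the posterior one step through the Markov chain.
    return post * p11 + (1.0 - post) * p01
```

For example, with `p11 = 0.8`, `p01 = 0.2`, and perfect observation, seeing a 1 drives the next belief to `p11 = 0.8`, seeing a 0 drives it to `p01 = 0.2`, and an unobserved arm's belief simply mixes toward the chain's stationary distribution. A low-complexity index policy of the kind studied in the article would rank arms by an index computed from such beliefs.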

Suggested Citation

  • Keqin Liu & Richard Weber & Chengzhong Zhang, 2024. "Low-complexity algorithm for restless bandits with imperfect observations," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 100(2), pages 467-508, October.
  • Handle: RePEc:spr:mathme:v:100:y:2024:i:2:d:10.1007_s00186-024-00868-x
    DOI: 10.1007/s00186-024-00868-x

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s00186-024-00868-x
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1007/s00186-024-00868-x?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Edward J. Sondik, 1978. "The Optimal Control of Partially Observable Markov Processes over the Infinite Horizon: Discounted Costs," Operations Research, INFORMS, vol. 26(2), pages 282-304, April.
    2. Christos H. Papadimitriou & John N. Tsitsiklis, 1999. "The Complexity of Optimal Queuing Network Control," Mathematics of Operations Research, INFORMS, vol. 24(2), pages 293-305, May.
    3. David B. Brown & James E. Smith, 2020. "Index Policies and Performance Bounds for Dynamic Selection Problems," Management Science, INFORMS, vol. 66(7), pages 3029-3050, July.
    4. Nicolas Gast & Bruno Gaujal & Kimang Khun, 2023. "Testing indexability and computing Whittle and Gittins index in subcubic time," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 97(3), pages 391-436, June.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Ya‐Tang Chuang & Manaf Zargoush & Somayeh Ghazalbash & Saied Samiedaluie & Kerry Kuluski & Sara Guilcher, 2023. "From prediction to decision: Optimizing long‐term care placements among older delayed discharge patients," Production and Operations Management, Production and Operations Management Society, vol. 32(4), pages 1041-1058, April.
    2. José Niño-Mora, 2023. "Markovian Restless Bandits and Index Policies: A Review," Mathematics, MDPI, vol. 11(7), pages 1-27, March.
    3. Zhang, Mimi, 2020. "A heuristic policy for maintaining multiple multi-state systems," Reliability Engineering and System Safety, Elsevier, vol. 203(C).
    4. José Niño-Mora, 2006. "Restless Bandit Marginal Productivity Indices, Diminishing Returns, and Optimal Control of Make-to-Order/Make-to-Stock M/G/1 Queues," Mathematics of Operations Research, INFORMS, vol. 31(1), pages 50-84, February.
    5. Santiago R. Balseiro & David B. Brown & Chen Chen, 2021. "Dynamic Pricing of Relocating Resources in Large Networks," Management Science, INFORMS, vol. 67(7), pages 4075-4094, July.
    6. Deligiannis, Michalis & Liberopoulos, George, 2023. "Dynamic ordering and buyer selection policies when service affects future demand," Omega, Elsevier, vol. 118(C).
    7. José Niño-Mora, 2020. "Fast Two-Stage Computation of an Index Policy for Multi-Armed Bandits with Setup Delays," Mathematics, MDPI, vol. 9(1), pages 1-36, December.
    8. Yanling Chang & Alan Erera & Chelsea White, 2015. "Value of information for a leader–follower partially observed Markov game," Annals of Operations Research, Springer, vol. 235(1), pages 129-153, December.
    9. Zhao, Yuxuan & Li, Xiangyong & Luo, Lan, 2025. "Dynamic allocation of display advertising impressions in dual sales channels," Omega, Elsevier, vol. 131(C).
    10. Song Lin & Juanjuan Zhang & John R. Hauser, 2015. "Learning from Experience, Simply," Marketing Science, INFORMS, vol. 34(1), pages 1-19, January.
    11. Glazebrook, K. D. & Mitchell, H. M. & Ansell, P. S., 2005. "Index policies for the maintenance of a collection of machines by a set of repairmen," European Journal of Operational Research, Elsevier, vol. 165(1), pages 267-284, August.
    12. Turgay Ayer & Can Zhang & Anthony Bonifonte & Anne C. Spaulding & Jagpreet Chhatwal, 2019. "Prioritizing Hepatitis C Treatment in U.S. Prisons," Operations Research, INFORMS, vol. 67(3), pages 853-873, May.
    13. Hao Zhang, 2010. "Partially Observable Markov Decision Processes: A Geometric Technique and Analysis," Operations Research, INFORMS, vol. 58(1), pages 214-228, February.
    14. Saghafian, Soroush, 2018. "Ambiguous partially observable Markov decision processes: Structural results and applications," Journal of Economic Theory, Elsevier, vol. 178(C), pages 1-35.
    15. Dong Li & Li Ding & Stephen Connor, 2020. "When to Switch? Index Policies for Resource Scheduling in Emergency Response," Production and Operations Management, Production and Operations Management Society, vol. 29(2), pages 241-262, February.
    16. Deep, Akash & Zhou, Shiyu & Veeramani, Dharmaraj & Chen, Yong, 2023. "Partially observable Markov decision process-based optimal maintenance planning with time-dependent observations," European Journal of Operational Research, Elsevier, vol. 311(2), pages 533-544.
    17. Doval, Laura, 2018. "Whether or not to open Pandora's box," Journal of Economic Theory, Elsevier, vol. 175(C), pages 127-158.
    18. Ricardo Montoya & Oded Netzer & Kamel Jedidi, 2010. "Dynamic Allocation of Pharmaceutical Detailing and Sampling for Long-Term Profitability," Marketing Science, INFORMS, vol. 29(5), pages 909-924, September-October.
    19. Ece Zeliha Demirci & Joachim Arts & Geert-Jan Van Houtum, 2022. "A restless bandit approach for capacitated condition based maintenance scheduling," DEM Discussion Paper Series 22-01, Department of Economics at the University of Luxembourg.
    20. Michelle Opp & Kevin Glazebrook & Vidyadhar G. Kulkarni, 2005. "Outsourcing warranty repairs: Dynamic allocation," Naval Research Logistics (NRL), John Wiley & Sons, vol. 52(5), pages 381-398, August.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:mathme:v:100:y:2024:i:2:d:10.1007_s00186-024-00868-x. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.