
Optimal Exploration-Exploitation in a Multi-armed-Bandit Problem with Non-stationary Rewards

Authors

  • Besbes, Omar (Columbia University)
  • Gur, Yonatan (Stanford University)
  • Zeevi, Assaf (Columbia University)

Abstract

In a multi-armed bandit (MAB) problem, a gambler must choose at each round of play one of K arms, each characterized by an unknown reward distribution. Reward realizations are observed only when an arm is selected, and the gambler's objective is to maximize his cumulative expected earnings over a given horizon of play T. To do this, the gambler needs to acquire information about the arms (exploration) while simultaneously optimizing immediate rewards (exploitation); the price paid for this trade-off is often referred to as the regret, and the main question is how small this price can be as a function of the horizon length T. This problem has been studied extensively when the reward distributions do not change over time, an assumption that supports a sharp characterization of the regret yet is often violated in practical settings. In this paper, we focus on a MAB formulation that allows for a broad range of temporal uncertainties in the rewards while still maintaining mathematical tractability. We fully characterize the (regret) complexity of this class of MAB problems by establishing a direct link between the extent of allowable reward "variation" and the minimal achievable regret. Our analysis draws connections between two rather disparate strands of literature: the adversarial and the stochastic MAB frameworks.
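
For readers who want a concrete picture of the exploration-exploitation mechanics the abstract describes, the sketch below shows one natural policy family for non-stationary rewards: an Exp3-type adversarial-bandit learner that is restarted in fixed-length epochs, so that reward information older than one epoch is discarded. This is an illustrative sketch only, not the paper's algorithm or its tuning; the reward model, the epoch length batch_size, and the exploration rate gamma are hypothetical choices made for the example. (For context, in this variation-budget line of work the minimax regret is known to grow on the order of V_T^(1/3) T^(2/3), up to factors depending on the number of arms K, where V_T bounds the total variation of the mean rewards over the horizon.)

    import numpy as np

    def restarted_exp3(reward_fn, K, T, batch_size, gamma, seed=0):
        """Exp3 run in epochs of length batch_size, with weights reset at the
        start of each epoch so the policy can track drifting rewards.

        reward_fn(t, arm) -> reward in [0, 1], unknown to the learner.
        Illustrative sketch; parameter choices are not the paper's tuning.
        """
        rng = np.random.default_rng(seed)
        total_reward = 0.0
        for start in range(0, T, batch_size):
            w = np.ones(K)  # restart: forget everything learned so far
            for t in range(start, min(start + batch_size, T)):
                # mix the weight distribution with uniform exploration
                p = (1.0 - gamma) * w / w.sum() + gamma / K
                arm = rng.choice(K, p=p)
                x = reward_fn(t, arm)
                total_reward += x
                # importance-weighted estimate; only the played arm is updated
                w[arm] *= np.exp(gamma * (x / p[arm]) / K)
                w /= w.max()  # rescale to avoid numerical overflow
        return total_reward

    # Example: two Bernoulli arms whose means swap halfway through the horizon.
    def drifting_reward(t, arm, rng=np.random.default_rng(1)):
        means = (0.8, 0.2) if t < 5000 else (0.2, 0.8)
        return float(rng.random() < means[arm])

    print(restarted_exp3(drifting_reward, K=2, T=10000, batch_size=1000, gamma=0.1))

The restart is the whole point of the example: a standard stochastic-bandit policy would keep trusting the arm that was best early on, while periodically resetting the weights trades some stationary-case performance for the ability to follow changes in the reward distributions.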

Suggested Citation

  • Besbes, Omar & Gur, Yonatan & Zeevi, Assaf, 2014. "Optimal Exploration-Exploitation in a Multi-armed-Bandit Problem with Non-stationary Rewards," Research Papers 3147, Stanford University, Graduate School of Business.
  • Handle: RePEc:ecl:stabus:3147

    Download full text from publisher

    File URL: http://www.gsb.stanford.edu/faculty-research/working-papers/optimal-exploration-exploitation-multi-armed-bandit-problem-non
    Download Restriction: no

    Citations

Citations are extracted by the CitEc Project.


    Cited by:

    1. Felipe Caro & Aparupa Das Gupta, 2022. "Robust control of the multi-armed bandit problem," Annals of Operations Research, Springer, vol. 317(2), pages 461-480, October.

