Authors
Listed:
- Santiago Balseiro
(Columbia Business School, Columbia University, New York 10027)
- Negin Golrezaei
(Sloan School of Management, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139)
- Mohammad Mahdian
(Google Research, New York, New York 10011)
- Vahab Mirrokni
(Google Research, New York, New York 10011)
- Jon Schneider
(Google Research, New York, New York 10011)
Abstract
In the classic contextual bandits problem, in each round t, a learner observes some context c, chooses some action i to perform, and receives some reward r_{i,t}(c). We consider the variant of this problem in which, in addition to receiving the reward r_{i,t}(c), the learner also learns the values of r_{i,t}(c') for some other contexts c' in a set O_i(c); that is, the rewards that would have been achieved by performing that action under the different contexts c' ∈ O_i(c). This variant arises in several strategic settings, such as learning how to bid in nontruthful repeated auctions, which has gained a lot of attention lately as many platforms have switched to running first-price auctions. We call this problem the contextual bandits problem with cross-learning. The best algorithms for the classic contextual bandits problem achieve Õ(√(CKT)) regret against all stationary policies, where C is the number of contexts, K the number of actions, and T the number of rounds. We design and analyze new algorithms for the contextual bandits problem with cross-learning and show that their regret has better dependence on the number of contexts. Under complete cross-learning, in which the rewards for all contexts are learned when choosing an action (that is, the set O_i(c) contains all contexts), we show that our algorithms achieve regret Õ(√(KT)), removing the dependence on C. In all other cases, that is, under partial cross-learning, in which |O_i(c)| < C for some context–action pair (i, c), the regret bounds depend on how the sets O_i(c) affect the degree to which cross-learning between contexts is possible. We simulate our algorithms on real auction data from an ad exchange running first-price auctions and show that they outperform traditional contextual bandit algorithms.
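The mechanism behind the improved Õ(√(KT)) bound under complete cross-learning can be illustrated with a minimal sketch (not the authors' algorithm; all reward distributions and parameter names here are hypothetical): a UCB-style learner that, because each pull of action i reveals a reward sample for every context, shares a single pull count n[i] across all contexts instead of exploring each context separately.

```python
import math
import random

random.seed(0)

K, C, T = 3, 4, 2000
# Hypothetical Bernoulli reward means: mu[c][i] = mean reward of action i
# in context c (for illustration only).
mu = [[random.random() for _ in range(K)] for _ in range(C)]

# Complete cross-learning: pulling action i in any context reveals a sample
# of r_{i,t}(c') for every context c', so per-context mean estimates can
# share one pull count n[i] across contexts -- no per-context exploration.
n = [0] * K
est = [[0.0] * K for _ in range(C)]  # est[c][i]: running mean of r_i(c)

regret = 0.0
for t in range(1, T + 1):
    c = random.randrange(C)  # context for this round
    # UCB index with the shared count n[i]; unplayed actions get priority.
    i = max(range(K), key=lambda a: float('inf') if n[a] == 0
            else est[c][a] + math.sqrt(2 * math.log(t) / n[a]))
    n[i] += 1
    # Observe rewards for ALL contexts and update every context's estimate.
    for cp in range(C):
        r = 1.0 if random.random() < mu[cp][i] else 0.0
        est[cp][i] += (r - est[cp][i]) / n[i]
    regret += max(mu[c]) - mu[c][i]

print(f"average regret per round: {regret / T:.3f}")
```

Because every action's estimate improves in every context each round, the exploration cost is paid once over all contexts rather than C times, which is the intuition for the dependence on C disappearing from the regret bound.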
Suggested Citation
Santiago Balseiro & Negin Golrezaei & Mohammad Mahdian & Vahab Mirrokni & Jon Schneider, 2023.
"Contextual Bandits with Cross-Learning,"
Mathematics of Operations Research, INFORMS, vol. 48(3), pages 1607-1629, August.
Handle:
RePEc:inm:ormoor:v:48:y:2023:i:3:p:1607-1629
DOI: 10.1287/moor.2022.1313