Shrinking the Upper Confidence Bound: A Dynamic Product Selection Problem for Urban Warehouses
Suggested Citation
DOI: 10.1287/mnsc.2020.3773
References listed on IDEAS
- Hamsa Bastani & Mohsen Bayati, 2020. "Online Decision Making with High-Dimensional Covariates," Operations Research, INFORMS, vol. 68(1), pages 276-294, January.
- Jean-Yves Audibert & Sébastien Bubeck & Gábor Lugosi, 2014. "Regret in Online Combinatorial Optimization," Mathematics of Operations Research, INFORMS, vol. 39(1), pages 31-45, February.
- Paat Rusmevichientong & John N. Tsitsiklis, 2010. "Linearly Parameterized Bandits," Mathematics of Operations Research, INFORMS, vol. 35(2), pages 395-411, May.
- Daniel Russo & Benjamin Van Roy, 2014. "Learning to Optimize via Posterior Sampling," Mathematics of Operations Research, INFORMS, vol. 39(4), pages 1221-1243, November.
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one.
- David Simchi-Levi & Rui Sun & Huanan Zhang, 2022. "Online Learning and Optimization for Revenue Management Problems with Add-on Discounts," Management Science, INFORMS, vol. 68(10), pages 7402-7421, October.
- Bin Han & Ilya O. Ryzhov & Boris Defourny, 2016. "Optimal Learning in Linear Regression with Combinatorial Feature Selection," INFORMS Journal on Computing, INFORMS, vol. 28(4), pages 721-735, November.
- Wang Chi Cheung & David Simchi-Levi & Ruihao Zhu, 2022. "Hedging the Drift: Learning to Optimize Under Nonstationarity," Management Science, INFORMS, vol. 68(3), pages 1696-1713, March.
- Hamsa Bastani & David Simchi-Levi & Ruihao Zhu, 2022. "Meta Dynamic Pricing: Transfer Learning Across Experiments," Management Science, INFORMS, vol. 68(3), pages 1865-1881, March.
- Yining Wang & Boxiao Chen & David Simchi-Levi, 2021. "Multimodal Dynamic Pricing," Management Science, INFORMS, vol. 67(10), pages 6136-6152, October.
- Masahiro Kato & Shinji Ito, 2023. "Best-of-Both-Worlds Linear Contextual Bandits," Papers 2312.16489, arXiv.org.
- Ying Zhong & L. Jeff Hong & Guangwu Liu, 2021. "Earning and Learning with Varying Cost," Production and Operations Management, Production and Operations Management Society, vol. 30(8), pages 2379-2394, August.
- Ruohan Zhan & Zhimei Ren & Susan Athey & Zhengyuan Zhou, 2021. "Policy Learning with Adaptively Collected Data," Papers 2105.02344, arXiv.org, revised Nov 2022.
- Zhan, Ruohan & Ren, Zhimei & Athey, Susan & Zhou, Zhengyuan, 2021. "Policy Learning with Adaptively Collected Data," Research Papers 3963, Stanford University, Graduate School of Business.
- Daniel Russo & Benjamin Van Roy, 2018. "Learning to Optimize via Information-Directed Sampling," Operations Research, INFORMS, vol. 66(1), pages 230-252, January.
- Eric M. Schwartz & Eric T. Bradlow & Peter S. Fader, 2017. "Customer Acquisition via Display Advertising Using Multi-Armed Bandit Experiments," Marketing Science, INFORMS, vol. 36(4), pages 500-522, July.
- Mark Egan & Tomas Philipson, 2016. "Health Care Adherence and Personalized Medicine," Working Papers 2016-H01, Becker Friedman Institute for Research in Economics.
- Rad Niazadeh & Negin Golrezaei & Joshua Wang & Fransisca Susan & Ashwinkumar Badanidiyuru, 2023. "Online Learning via Offline Greedy Algorithms: Applications in Market Design and Optimization," Management Science, INFORMS, vol. 69(7), pages 3797-3817, July.
- Mengying Zhu & Xiaolin Zheng & Yan Wang & Yuyuan Li & Qianqiao Liang, 2019. "Adaptive Portfolio by Solving Multi-armed Bandit via Thompson Sampling," Papers 1911.05309, arXiv.org, revised Nov 2019.
- David B. Brown & James E. Smith, 2013. "Optimal Sequential Exploration: Bandits, Clairvoyants, and Wildcats," Operations Research, INFORMS, vol. 61(3), pages 644-665, June.
- Yuqing Zhang & Neil Walton, 2019. "Adaptive Pricing in Insurance: Generalized Linear Models and Gaussian Process Regression Approaches," Papers 1907.05381, arXiv.org.
- Xi Chen & Quanquan Liu & Yining Wang, 2023. "Active Learning for Contextual Search with Binary Feedback," Management Science, INFORMS, vol. 69(4), pages 2165-2181, April.
- Shipra Agrawal & Vashist Avadhanula & Vineet Goyal & Assaf Zeevi, 2019. "MNL-Bandit: A Dynamic Learning Approach to Assortment Selection," Operations Research, INFORMS, vol. 67(5), pages 1453-1485, September.
- Pourbabaee, Farzad, 2021. "High dimensional decision making, upper and lower bounds," Economics Letters, Elsevier, vol. 204(C).
- Anand Kalvit & Aleksandrs Slivkins & Yonatan Gur, 2024. "Incentivized Exploration via Filtered Posterior Sampling," Papers 2402.13338, arXiv.org.
- Claudio Cardoso Flores & Marcelo Cunha Medeiros, 2020. "Online Action Learning in High Dimensions: A Conservative Perspective," Papers 2009.13961, arXiv.org, revised Mar 2024.
More about this item
Keywords
sequential decision-making; product selection; online learning; online retailing; regret analysis
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:inm:ormnsc:v:67:y:2021:i:8:p:4756-4771. See general information about how to correct material in RePEc.