Robust Markov Decision Processes
Author
Abstract
Suggested Citation
Download full text from publisher
References listed on IDEAS
- Jay K. Satia & Roy E. Lave, 1973. "Markovian Decision Processes with Uncertain Transition Probabilities," Operations Research, INFORMS, vol. 21(3), pages 728-740, June.
- Shie Mannor & Duncan Simester & Peng Sun & John N. Tsitsiklis, 2007. "Bias and Variance Approximation in Value Function Estimates," Management Science, INFORMS, vol. 53(2), pages 308-322, February.
- Chelsea C. White & Hany K. Eldeib, 1994. "Markov Decision Processes with Imprecise Transition Probabilities," Operations Research, INFORMS, vol. 42(4), pages 739-749, August.
- Garud N. Iyengar, 2005. "Robust Dynamic Programming," Mathematics of Operations Research, INFORMS, vol. 30(2), pages 257-280, May.
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one; a minimal sketch of this matching idea follows the list.
- David L. Kaufman & Andrew J. Schaefer, 2013. "Robust Modified Policy Iteration," INFORMS Journal on Computing, INFORMS, vol. 25(3), pages 396-410, August.
- Wolfram Wiesemann & Daniel Kuhn & Berç Rustem, 2013. "Robust Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 38(1), pages 153-183, February.
- V Varagapriya & Vikas Vikram Singh & Abdel Lisser, 2023. "Joint chance-constrained Markov decision processes," Annals of Operations Research, Springer, vol. 322(2), pages 1013-1035, March.
- Zeynep Turgay & Fikri Karaesmen & Egemen Lerzan Örmeci, 2018. "Structural properties of a class of robust inventory and queueing control problems," Naval Research Logistics (NRL), John Wiley & Sons, vol. 65(8), pages 699-716, December.
- V Varagapriya & Vikas Vikram Singh & Abdel Lisser, 2024. "Rank-1 transition uncertainties in constrained Markov decision processes," European Journal of Operational Research, Elsevier, vol. 318(1), pages 167-178.
- Andrew J. Keith & Darryl K. Ahner, 2021. "A survey of decision making and optimization under uncertainty," Annals of Operations Research, Springer, vol. 300(2), pages 319-353, May.
- Erick Delage & Shie Mannor, 2010. "Percentile Optimization for Markov Decision Processes with Parameter Uncertainty," Operations Research, INFORMS, vol. 58(1), pages 203-213, February.
- Zhicheng Zhu & Yisha Xiang & Ming Zhao & Yue Shi, 2023. "Data-driven remanufacturing planning with parameter uncertainty," European Journal of Operational Research, Elsevier, vol. 309(1), pages 102-116.
- Peter Buchholz & Dimitri Scheftelowitsch, 2019. "Computation of weighted sums of rewards for concurrent MDPs," Mathematical Methods of Operations Research, Springer; Gesellschaft für Operations Research (GOR); Nederlands Genootschap voor Besliskunde (NGB), vol. 89(1), pages 1-42, February.
- Felipe Caro & Aparupa Das Gupta, 2022. "Robust control of the multi-armed bandit problem," Annals of Operations Research, Springer, vol. 317(2), pages 461-480, October.
- Erim Kardeş & Fernando Ordóñez & Randolph W. Hall, 2011. "Discounted Robust Stochastic Games and an Application to Queueing Control," Operations Research, INFORMS, vol. 59(2), pages 365-382, April.
- Shiau Hong Lim & Huan Xu & Shie Mannor, 2016. "Reinforcement Learning in Robust Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 41(4), pages 1325-1353, November.
- Shie Mannor & Ofir Mebel & Huan Xu, 2016. "Robust MDPs with k-Rectangular Uncertainty," Mathematics of Operations Research, INFORMS, vol. 41(4), pages 1484-1509, November.
- Arthur Flajolet & Sébastien Blandin & Patrick Jaillet, 2018. "Robust Adaptive Routing Under Uncertainty," Operations Research, INFORMS, vol. 66(1), pages 210-229, January.
- Austin Bren & Soroush Saghafian, 2018. "Data-Driven Percentile Optimization for Multi-Class Queueing Systems with Model Ambiguity: Theory and Application," Working Paper Series rwp18-008, Harvard University, John F. Kennedy School of Government.
- Garud N. Iyengar, 2005. "Robust Dynamic Programming," Mathematics of Operations Research, INFORMS, vol. 30(2), pages 257-280, May.
- Zahra Ghatrani & Archis Ghate, 2024. "Percentile optimization in multi-armed bandit problems," Annals of Operations Research, Springer, vol. 340(2), pages 837-862, September.
- Huan Xu & Shie Mannor, 2012. "Distributionally Robust Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 37(2), pages 288-300, May.
- Maximilian Blesch & Philipp Eisenhauer, 2023. "Robust Decision-Making under Risk and Ambiguity," Rationality and Competition Discussion Paper Series 463, CRC TRR 190 Rationality and Competition.
- D. Škulj & R. Hable, 2013. "Coefficients of ergodicity for Markov chains with uncertain parameters," Metrika: International Journal for Theoretical and Applied Statistics, Springer, vol. 76(1), pages 107-133, January.
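The matching rule stated above the list can be illustrated with a short sketch: two items are treated as related when they share many references and many citing items. The handles, data, and additive score below are assumptions made purely for illustration; they are not RePEc's or CitEc's actual matching algorithm.

```python
# Minimal sketch of the "most related items" idea: shared references
# (bibliographic coupling) plus shared citing items (co-citation).
# Handles and the additive score are illustrative assumptions only.

def relatedness(item_a, item_b):
    """Count shared references plus shared citing items."""
    shared_refs = len(item_a["cites"] & item_b["cites"])
    shared_citers = len(item_a["cited_by"] & item_b["cited_by"])
    return shared_refs + shared_citers

# Hypothetical citation data keyed by made-up item handles.
items = {
    "wiesemann-kuhn-rustem-2013": {
        "cites": {"iyengar2005", "satia-lave1973", "white-eldeib1994"},
        "cited_by": {"lim-xu-mannor2016", "mannor-mebel-xu2016"},
    },
    "iyengar2005": {
        "cites": {"satia-lave1973", "white-eldeib1994"},
        "cited_by": {"lim-xu-mannor2016", "xu-mannor2012"},
    },
}

score = relatedness(items["wiesemann-kuhn-rustem-2013"], items["iyengar2005"])
print(score)  # 2 shared references + 1 shared citing item = 3
```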
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:com:wpaper:034.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Anil Khuman (email available below). General contact details of provider: http://www.comisef.eu.