Truncation of Markov decision problems with a queueing network overflow control application
References listed on IDEAS
- Amedeo R. Odoni, 1969. "On Finding the Maximal Gain for Markov Decision Processes," Operations Research, INFORMS, vol. 17(5), pages 857-860, October.
- A. Hordijk & L. C. M. Kallenberg, 1979. "Linear Programming and Markov Decision Chains," Management Science, INFORMS, vol. 25(4), pages 352-362, April.
- N. M. van Dijk, 1988. "Approximate uniformization for continuous-time Markov chains with an application to performability analysis," Serie Research Memoranda 0054, VU University Amsterdam, Faculty of Economics, Business Administration and Econometrics.
- A. Hordijk & L. C. M. Kallenberg, 1984. "Constrained Undiscounted Stochastic Dynamic Programming," Mathematics of Operations Research, INFORMS, vol. 9(2), pages 276-289, May.
- Martin L. Puterman & Moon Chirl Shin, 1982. "Action Elimination Procedures for Modified Policy Iteration Algorithms," Operations Research, INFORMS, vol. 30(2), pages 301-318, April.
- Ward Whitt, 1978. "Approximations of Dynamic Programs, I," Mathematics of Operations Research, INFORMS, vol. 3(3), pages 231-243, August.
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one.
- Lodewijk Kallenberg, 2013. "Derman’s book as inspiration: some results on LP for MDPs," Annals of Operations Research, Springer, vol. 208(1), pages 63-94, September.
- N. M. van Dijk, 1989. "The importance of bias-terms for error bounds and comparison results," Serie Research Memoranda 0036, VU University Amsterdam, Faculty of Economics, Business Administration and Econometrics.
- Dmitry Krass & O. J. Vrieze, 2002. "Achieving Target State-Action Frequencies in Multichain Average-Reward Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 27(3), pages 545-566, August.
- Pelin Canbolat & Uriel Rothblum, 2013. "(Approximate) iterated successive approximations algorithm for sequential decision processes," Annals of Operations Research, Springer, vol. 208(1), pages 309-320, September.
- Vivek S. Borkar & Vladimir Gaitsgory, 2019. "Linear Programming Formulation of Long-Run Average Optimal Control Problem," Journal of Optimization Theory and Applications, Springer, vol. 181(1), pages 101-125, April.
- N. M. Van Dijk & K. Sladký, 1999. "Error Bounds for Nonnegative Dynamic Models," Journal of Optimization Theory and Applications, Springer, vol. 101(2), pages 449-474, May.
- Silvia Florio & Wolfgang Runggaldier, 1999. "On hedging in finite security markets," Applied Mathematical Finance, Taylor & Francis Journals, vol. 6(3), pages 159-176.
- Jérôme Renault & Xavier Venel, 2017. "Long-Term Values in Markov Decision Processes and Repeated Games, and a New Distance for Probability Spaces," Mathematics of Operations Research, INFORMS, vol. 42(2), pages 349-376, May.
- Dellaert, N. P. & Melo, M. T., 1996. "Production strategies for a stochastic lot-sizing problem with constant capacity," European Journal of Operational Research, Elsevier, vol. 92(2), pages 281-301, July.
- Daniel F. Silva & Bo Zhang & Hayriye Ayhan, 2018. "Admission control strategies for tandem Markovian loss systems," Queueing Systems: Theory and Applications, Springer, vol. 90(1), pages 35-63, October.
- Richard T. Boylan & Bente Villadsen, undated. "A Bellman's Equation for the Study of Income Smoothing," Computing in Economics and Finance 1996 _009, Society for Computational Economics.
- Vladimir Ejov & Jerzy A. Filar & Michael Haythorpe & Giang T. Nguyen, 2009. "Refined MDP-Based Branch-and-Fix Algorithm for the Hamiltonian Cycle Problem," Mathematics of Operations Research, INFORMS, vol. 34(3), pages 758-768, August.
- D. P. de Farias & B. Van Roy, 2003. "The Linear Programming Approach to Approximate Dynamic Programming," Operations Research, INFORMS, vol. 51(6), pages 850-865, December.
- Nielsen, Lars Relund & Kristensen, Anders Ringgaard, 2006. "Finding the K best policies in a finite-horizon Markov decision process," European Journal of Operational Research, Elsevier, vol. 175(2), pages 1164-1179, December.
- Robert Kirkby, 2017. "Convergence of Discretized Value Function Iteration," Computational Economics, Springer;Society for Computational Economics, vol. 49(1), pages 117-153, January.
- Benjamin Van Roy, 2006. "Performance Loss Bounds for Approximate Value Iteration with State Aggregation," Mathematics of Operations Research, INFORMS, vol. 31(2), pages 234-244, May.
- Karel Sladký, 2007. "Stochastic Growth Models With No Discounting [Stochastické růstové modely bez diskontování]," Acta Oeconomica Pragensia, Prague University of Economics and Business, vol. 2007(4), pages 88-98.
- Purba Das & T. Parthasarathy & G. Ravindran, 2022. "On Completely Mixed Stochastic Games," SN Operations Research Forum, Springer, vol. 3(4), pages 1-26, December.
- Prasenjit Mondal, 2020. "Computing semi-stationary optimal policies for multichain semi-Markov decision processes," Annals of Operations Research, Springer, vol. 287(2), pages 843-865, April.
- Dellaert, N. P. & Melo, M. T., 1998. "Make-to-order policies for a stochastic lot-sizing problem using overtime," International Journal of Production Economics, Elsevier, vol. 56(1), pages 79-97, September.
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:vua:wpaper:1989-65. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.
If CitEc recognized a bibliographic reference but did not link it to an item in RePEc, you can help with this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: R. Dam (email available below). General contact details of provider: https://edirc.repec.org/data/fewvunl.html.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.