Printed from https://ideas.repec.org/a/eee/spapps/v115y2005i5p769-779.html

Sample path optimality for a Markov optimization problem

Author

Listed:
  • Hunt, F.Y.

Abstract

We study a unichain Markov decision process, i.e., a controlled Markov process whose state process under a stationary policy is an ergodic Markov chain. The state and action spaces are assumed to be either finite or countable. When the state process is uniformly ergodic and the immediate cost is bounded, a policy that minimizes the long-term expected average cost also has an nth-stage sample-path cost that, with probability one, is asymptotically less than the nth-stage sample-path cost under any other non-optimal stationary policy with a larger expected average cost. This strengthens, in the Markov model case, the a.s. asymptotic optimality property frequently discussed in the literature.
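The pathwise comparison described in the abstract can be illustrated with a small simulation. The sketch below is not taken from the paper: the two-state chain, its transition probabilities, and the cost values are invented for illustration, and each stationary policy is represented simply by the transition matrix and cost vector it induces. By the ergodic theorem, the nth-stage sample-path cost divided by n converges almost surely to the expected average cost under each stationary policy, so the policy with the smaller average cost eventually has the smaller cumulative cost along almost every sample path; the paper's result strengthens this kind of comparison under uniform ergodicity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each stationary policy is represented by the transition matrix and immediate
# cost vector it induces on a two-state chain (hypothetical values, not from
# the paper).
P = {
    "optimal":     np.array([[0.9, 0.1], [0.2, 0.8]]),
    "non-optimal": np.array([[0.5, 0.5], [0.5, 0.5]]),
}
c = {
    "optimal":     np.array([1.0, 2.0]),
    "non-optimal": np.array([1.5, 2.5]),
}

def cumulative_costs(policy, n, x0=0):
    """Cumulative sample-path costs S_1, ..., S_n of the chain under `policy`."""
    x, total = x0, 0.0
    out = np.empty(n)
    for k in range(n):
        total += c[policy][x]
        out[k] = total
        x = rng.choice(2, p=P[policy][x])
    return out

n = 10_000
s_opt = cumulative_costs("optimal", n)
s_sub = cumulative_costs("non-optimal", n)

# S_n / n converges a.s. to the expected average cost (4/3 vs. 2 here), so the
# cumulative cost under the optimal policy is eventually the strictly smaller one.
print("average cost (optimal)    :", s_opt[-1] / n)
print("average cost (non-optimal):", s_sub[-1] / n)
print("optimal policy cheaper over the last 90% of stages:",
      np.all(s_opt[n // 10:] < s_sub[n // 10:]))
```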

Suggested Citation

  • Hunt, F.Y., 2005. "Sample path optimality for a Markov optimization problem," Stochastic Processes and their Applications, Elsevier, vol. 115(5), pages 769-779, May.
  • Handle: RePEc:eee:spapps:v:115:y:2005:i:5:p:769-779

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0304-4149(04)00190-5
    Download Restriction: Full text for ScienceDirect subscribers only

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Glynn, Peter W. & Ormoneit, Dirk, 2002. "Hoeffding's inequality for uniformly ergodic Markov chains," Statistics & Probability Letters, Elsevier, vol. 56(2), pages 143-146, January.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project.

    Cited by:

    1. Hachicha, Wafik & Ammeri, Ahmed & Masmoudi, Faouzi & Chachoub, Habib, 2010. "A comprehensive literature classification of simulation optimisation methods," MPRA Paper 27652, University Library of Munich, Germany.
    2. Rolando Cavazos-Cadena & Raúl Montes-de-Oca & Karel Sladký, 2014. "A Counterexample on Sample-Path Optimality in Stable Markov Decision Chains with the Average Reward Criterion," Journal of Optimization Theory and Applications, Springer, vol. 163(2), pages 674-684, November.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. H. S. Chang, 2004. "Technical Note: On Ordinal Comparison of Policies in Markov Reward Processes," Journal of Optimization Theory and Applications, Springer, vol. 122(1), pages 207-217, July.
    2. Yongqiang Tang, 2007. "A Hoeffding-Type Inequality for Ergodic Time Series," Journal of Theoretical Probability, Springer, vol. 20(2), pages 167-176, June.
    3. Liu, Jinpeng & Liu, Yuanyuan & Zhao, Yiqiang Q., 2022. "Augmented truncation approximations to the solution of Poisson’s equation for Markov chains," Applied Mathematics and Computation, Elsevier, vol. 414(C).
    4. Shie Mannor & John N. Tsitsiklis, 2005. "On the Empirical State-Action Frequencies in Markov Decision Processes Under General Policies," Mathematics of Operations Research, INFORMS, vol. 30(3), pages 545-561, August.
    5. Sandrić, Nikola & Šebek, Stjepan, 2023. "Hoeffding’s inequality for non-irreducible Markov models," Statistics & Probability Letters, Elsevier, vol. 200(C).
    6. Miasojedow, Błażej, 2014. "Hoeffding’s inequalities for geometrically ergodic Markov chains on general state space," Statistics & Probability Letters, Elsevier, vol. 87(C), pages 115-120.
    7. Renou, Ludovic & Tomala, Tristan, 2015. "Approximate implementation in Markovian environments," Journal of Economic Theory, Elsevier, vol. 159(PA), pages 401-442.
    8. Ahmad, I.A. & Amezziane, M., 2013. "Probability inequalities for bounded random vectors," Statistics & Probability Letters, Elsevier, vol. 83(4), pages 1136-1142.
    9. Penev, Spiridon & Peng, Hanxiang & Schick, Anton & Wefelmeyer, Wolfgang, 2004. "Efficient estimators for functionals of Markov chains with parametric marginals," Statistics & Probability Letters, Elsevier, vol. 66(3), pages 335-345, February.
    10. Ankush Agarwal & Stefano de Marco & Emmanuel Gobet & Gang Liu, 2017. "Rare event simulation related to financial risks: efficient estimation and sensitivity analysis," Working Papers hal-01219616, HAL.
    11. Svetlana Ekisheva & Mark Borodovsky, 2011. "Uniform Accuracy of the Maximum Likelihood Estimates for Probabilistic Models of Biological Sequences," Methodology and Computing in Applied Probability, Springer, vol. 13(1), pages 105-120, March.
    12. Boucher, Thomas R., 2009. "A Hoeffding inequality for Markov chains using a generalized inverse," Statistics & Probability Letters, Elsevier, vol. 79(8), pages 1105-1107, April.
    13. Choi, Michael C.H. & Li, Evelyn, 2019. "A Hoeffding’s inequality for uniformly ergodic diffusion process," Statistics & Probability Letters, Elsevier, vol. 150(C), pages 23-28.
    14. Lember, Jüri & Matzinger, Heinrich & Sova, Joonas & Zucca, Fabio, 2018. "Lower bounds for moments of global scores of pairwise Markov chains," Stochastic Processes and their Applications, Elsevier, vol. 128(5), pages 1678-1710.
    15. Chang, Hyeong Soo, 2006. "On convergence rate of the Shannon entropy rate of ergodic Markov chains via sample-path simulation," Statistics & Probability Letters, Elsevier, vol. 76(12), pages 1261-1264, July.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:spapps:v:115:y:2005:i:5:p:769-779. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/505572/description#description.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.