
Convergence of controlled models and finite-state approximation for discounted continuous-time Markov decision processes with constraints

Author

Listed:
  • Guo, Xianping
  • Zhang, Wenzhao

Abstract

In this paper we consider the convergence of a sequence {Mn} of models of discounted continuous-time constrained Markov decision processes (MDPs) to a “limit” model, denoted by M∞. For models with denumerable states and unbounded transition rates, under reasonably mild conditions we prove that the constrained optimal policies and the optimal values of {Mn} converge to those of M∞, using a technique based on occupation measures. As an application of this convergence result, we show that an optimal policy and the optimal value of a countable-state continuous-time MDP can be approximated by those of finite-state continuous-time MDPs. Finally, we illustrate this finite-state approximation by numerically solving a controlled birth-and-death system, and we give the corresponding error bound for the approximation.
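
To make the occupation-measure technique and the finite-state approximation concrete, the following is a minimal Python sketch, not the authors' model or data: a controlled birth-and-death system is truncated at a level N, and the resulting linear program over discounted occupation measures is solved with scipy.optimize.linprog. The birth and death rates, the cost rates c0 and c1, the constraint bound d, the discount rate alpha, and the truncation level N are all illustrative assumptions.

import numpy as np
from scipy.optimize import linprog

# Minimal sketch: occupation-measure linear program for a discounted,
# constrained continuous-time MDP on a truncated (finite) state space.
# All rates, costs, and bounds below are illustrative assumptions.

N = 50                       # truncation level: states {0, 1, ..., N}
actions = [0.5, 1.0, 2.0]    # admissible service-rate multipliers (assumed)
alpha = 0.1                  # discount rate
d = 30.0                     # bound on the discounted constraint cost (assumed)

states = range(N + 1)
n_act = len(actions)
n_var = (N + 1) * n_act                   # one variable eta(i, a) per pair
idx = lambda i, k: i * n_act + k          # flatten (state, action) -> column

def birth(i, a): return 1.0 if i < N else 0.0   # arrivals stop at the truncation
def death(i, a): return a * min(i, 5.0)         # controlled service rate
def c0(i, a):    return float(i)                # holding cost rate (objective)
def c1(i, a):    return 2.0 * a                 # control-effort cost (constraint)

gamma = np.zeros(N + 1)
gamma[0] = 1.0                            # initial distribution: start empty

# Flow (balance) constraints on the occupation measure eta:
#   alpha*sum_a eta(j,a) - sum_{i,a} q(j|i,a)*eta(i,a) = gamma(j)   for all j
A_eq = np.zeros((N + 1, n_var))
for i in states:
    for k, a in enumerate(actions):
        lam, mu = birth(i, a), death(i, a)
        col = idx(i, k)
        A_eq[i, col] += alpha + lam + mu        # alpha*eta(i,a) - q(i|i,a)*eta(i,a)
        if i < N:
            A_eq[i + 1, col] -= lam             # -q(i+1|i,a)*eta(i,a)
        if i > 0:
            A_eq[i - 1, col] -= mu              # -q(i-1|i,a)*eta(i,a)
b_eq = gamma

# Constraint cost:  sum_{i,a} c1(i,a)*eta(i,a) <= d
A_ub = np.array([[c1(i, a) for i in states for a in actions]])
b_ub = np.array([d])

# Objective:  minimize  sum_{i,a} c0(i,a)*eta(i,a)
c = np.array([c0(i, a) for i in states for a in actions])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0, None), method="highs")
eta = res.x.reshape(N + 1, n_act)

# A randomized stationary policy is recovered by normalizing eta over actions.
policy = eta / np.maximum(eta.sum(axis=1, keepdims=True), 1e-12)
print("approximate optimal value on the truncated model:", res.fun)

Raising the truncation level N and re-solving gives a sequence of finite-state models whose optimal values and policies approach those of the countable-state problem, in the spirit of the convergence result stated above; the error bound derived in the paper is not reproduced in this sketch.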

Suggested Citation

  • Guo, Xianping & Zhang, Wenzhao, 2014. "Convergence of controlled models and finite-state approximation for discounted continuous-time Markov decision processes with constraints," European Journal of Operational Research, Elsevier, vol. 238(2), pages 486-496.
  • Handle: RePEc:eee:ejores:v:238:y:2014:i:2:p:486-496
    DOI: 10.1016/j.ejor.2014.03.037

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0377221714002768
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.ejor.2014.03.037?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Xianping Guo & Alexei Piunovskiy, 2011. "Discounted Continuous-Time Markov Decision Processes with Constraints: Unbounded Transition and Loss Rates," Mathematics of Operations Research, INFORMS, vol. 36(1), pages 105-132, February.
    2. Jorge Alvarez-Mena & Onésimo Hernández-Lerma, 2002. "Convergence of the optimal values of constrained Markov control processes," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 55(3), pages 461-484, June.
    3. Cervellera, C. & Macciò, D., 2011. "A comparison of global and semi-local approximation in T-stage stochastic optimization," European Journal of Operational Research, Elsevier, vol. 208(2), pages 109-118, January.
    4. Jorge Alvarez-Mena & Onésimo Hernández-Lerma, 2006. "Existence of nash equilibria for constrained stochastic games," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 63(2), pages 261-285, May.
    5. Eugene A. Feinberg, 2000. "Constrained Discounted Markov Decision Processes and Hamiltonian Cycles," Mathematics of Operations Research, INFORMS, vol. 25(1), pages 130-140, February.
    6. Eugene A. Feinberg, 2004. "Continuous Time Discounted Jump Markov Decision Processes: A Discrete-Event Approach," Mathematics of Operations Research, INFORMS, vol. 29(3), pages 492-524, August.
    7. Alexey Piunovskiy & Yi Zhang, 2012. "The Transformation Method for Continuous-Time Markov Decision Processes," Journal of Optimization Theory and Applications, Springer, vol. 154(2), pages 691-712, August.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Tomás Prieto-Rumeau & José Lorenzo, 2015. "Approximation of zero-sum continuous-time Markov games under the discounted payoff criterion," TOP: An Official Journal of the Spanish Society of Statistics and Operations Research, Springer;Sociedad de Estadística e Investigación Operativa, vol. 23(3), pages 799-836, October.
    2. Qingda Wei, 2016. "Continuous-time Markov decision processes with risk-sensitive finite-horizon cost criterion," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 84(3), pages 461-487, December.
    3. Ping Cao & Jingui Xie, 2016. "Optimal control of a multiclass queueing system when customers can change types," Queueing Systems: Theory and Applications, Springer, vol. 82(3), pages 285-313, April.
    4. Qingda Wei, 2017. "Finite approximation for finite-horizon continuous-time Markov decision processes," 4OR, Springer, vol. 15(1), pages 67-84, March.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Wenzhao Zhang, 2019. "Discrete-Time Constrained Average Stochastic Games with Independent State Processes," Mathematics, MDPI, vol. 7(11), pages 1-18, November.
    2. Lanlan Zhang & Xianping Guo, 2008. "Constrained continuous-time Markov decision processes with average criteria," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 67(2), pages 323-340, April.
    3. Xianping Guo & Yi Zhang, 2016. "Optimality of Mixed Policies for Average Continuous-Time Markov Decision Processes with Constraints," Mathematics of Operations Research, INFORMS, vol. 41(4), pages 1276-1296, November.
    4. Yonghui Huang & Qingda Wei & Xianping Guo, 2013. "Constrained Markov decision processes with first passage criteria," Annals of Operations Research, Springer, vol. 206(1), pages 197-219, July.
    5. Ping Cao & Jingui Xie, 2016. "Optimal control of a multiclass queueing system when customers can change types," Queueing Systems: Theory and Applications, Springer, vol. 82(3), pages 285-313, April.
    6. Alexey Piunovskiy & Yi Zhang, 2012. "The Transformation Method for Continuous-Time Markov Decision Processes," Journal of Optimization Theory and Applications, Springer, vol. 154(2), pages 691-712, August.
    7. Eugene A. Feinberg & Uriel G. Rothblum, 2012. "Splitting Randomized Stationary Policies in Total-Reward Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 37(1), pages 129-153, February.
    8. Xianping Guo & Alexei Piunovskiy, 2011. "Discounted Continuous-Time Markov Decision Processes with Constraints: Unbounded Transition and Loss Rates," Mathematics of Operations Research, INFORMS, vol. 36(1), pages 105-132, February.
    9. Vladimir Ejov & Jerzy A. Filar & Michael Haythorpe & Giang T. Nguyen, 2009. "Refined MDP-Based Branch-and-Fix Algorithm for the Hamiltonian Cycle Problem," Mathematics of Operations Research, INFORMS, vol. 34(3), pages 758-768, August.
    10. Ali Eshragh & Jerzy Filar & Michael Haythorpe, 2011. "A hybrid simulation-optimization algorithm for the Hamiltonian cycle problem," Annals of Operations Research, Springer, vol. 189(1), pages 103-125, September.
    11. Vivek Borkar & Jerzy Filar, 2013. "Markov chains, Hamiltonian cycles and volumes of convex bodies," Journal of Global Optimization, Springer, vol. 55(3), pages 633-639, March.
    12. Ali Eshragh & Jerzy Filar, 2011. "Hamiltonian Cycles, Random Walks, and Discounted Occupational Measures," Mathematics of Operations Research, INFORMS, vol. 36(2), pages 258-270, May.
    13. Jun Fei & Eugene Feinberg, 2013. "Variance minimization for constrained discounted continuous-time MDPs with exponentially distributed stopping times," Annals of Operations Research, Springer, vol. 208(1), pages 433-450, September.
    14. Ali Eshragh & Jerzy A. Filar & Thomas Kalinowski & Sogol Mohammadian, 2020. "Hamiltonian Cycles and Subsets of Discounted Occupational Measures," Mathematics of Operations Research, INFORMS, vol. 45(2), pages 713-731, May.
    15. A. B. Piunovskiy, 2004. "Optimal Interventions in Countable Jump Markov Processes," Mathematics of Operations Research, INFORMS, vol. 29(2), pages 289-308, May.
    16. Subrata Golui & Chandan Pal & Subhamay Saha, 2022. "Continuous-Time Zero-Sum Games for Markov Decision Processes with Discounted Risk-Sensitive Cost Criterion," Dynamic Games and Applications, Springer, vol. 12(2), pages 485-512, June.
    17. Subrata Golui & Chandan Pal, 2022. "Risk-sensitive discounted cost criterion for continuous-time Markov decision processes on a general state space," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 95(2), pages 219-247, April.
    18. Nelly Litvak & Vladimir Ejov, 2009. "Markov Chains and Optimality of the Hamiltonian Cycle," Mathematics of Operations Research, INFORMS, vol. 34(1), pages 71-82, February.
    19. Tomás Prieto-Rumeau & Onésimo Hernández-Lerma, 2016. "Uniform ergodicity of continuous-time controlled Markov chains: A survey and new results," Annals of Operations Research, Springer, vol. 241(1), pages 249-293, June.
    20. Eugene Feinberg, 2005. "On essential information in sequential decision processes," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 62(3), pages 399-410, December.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:ejores:v:238:y:2014:i:2:p:486-496. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/locate/eor.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.