Printed from https://ideas.repec.org/a/wly/navres/v66y2019i1p38-56.html

On the reduction of total‐cost and average‐cost MDPs to discounted MDPs

Author

Listed:
  • Eugene A. Feinberg
  • Jefferson Huang

Abstract

This article provides conditions under which total‐cost and average‐cost Markov decision processes (MDPs) can be reduced to discounted ones. Results are given for transient total‐cost MDPs with transition rates whose values may be greater than one, as well as for average‐cost MDPs with transition probabilities satisfying the condition that there is a state such that the expected time to reach it is uniformly bounded for all initial states and stationary policies. In particular, these reductions imply sufficient conditions for the validity of optimality equations and the existence of stationary optimal policies for MDPs with undiscounted total-cost and average‐cost criteria. When the state and action sets are finite, these reductions lead to linear programming formulations and complexity estimates for MDPs under the aforementioned criteria. © 2017 Wiley Periodicals, Inc. Naval Research Logistics 66:38–56, 2019
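To illustrate the discounted criterion that the paper reduces to, the following is a minimal value-iteration sketch for a finite discounted MDP. The 2-state, 2-action model is invented for illustration and is not taken from the article; the article's contribution is the reduction of total-cost and average-cost problems to this discounted setting, not this standard algorithm.

```python
# Minimal value-iteration sketch for a finite discounted MDP.
# The model below (P, c, beta) is hypothetical, chosen only to
# show the discounted optimality equation being solved.

def value_iteration(P, c, beta, tol=1e-10):
    """Compute the optimal value vector of a discounted-cost MDP.

    P[a][s][t] : probability of moving from state s to state t under action a
    c[a][s]    : one-step cost of choosing action a in state s
    beta       : discount factor in [0, 1)
    """
    n = len(c[0])
    v = [0.0] * n
    while True:
        # One Bellman update: v_new(s) = min_a [ c(a,s) + beta * E[v] ]
        v_new = [
            min(
                c[a][s] + beta * sum(P[a][s][t] * v[t] for t in range(n))
                for a in range(len(c))
            )
            for s in range(n)
        ]
        if max(abs(v_new[s] - v[s]) for s in range(n)) < tol:
            return v_new
        v = v_new

# Hypothetical 2-state, 2-action example.
P = [
    [[0.9, 0.1], [0.2, 0.8]],   # transitions under action 0
    [[0.5, 0.5], [0.6, 0.4]],   # transitions under action 1
]
c = [[1.0, 2.0], [1.5, 0.5]]    # one-step costs c[a][s]
v = value_iteration(P, c, beta=0.9)
```

The returned vector `v` satisfies the discounted optimality equation up to the tolerance; for finite state and action sets the same solution can equivalently be obtained from the standard linear programming formulation mentioned in the abstract.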

Suggested Citation

  • Eugene A. Feinberg & Jefferson Huang, 2019. "On the reduction of total‐cost and average‐cost MDPs to discounted MDPs," Naval Research Logistics (NRL), John Wiley & Sons, vol. 66(1), pages 38-56, February.
  • Handle: RePEc:wly:navres:v:66:y:2019:i:1:p:38-56
    DOI: 10.1002/nav.21743
    Download full text from publisher

    File URL: https://doi.org/10.1002/nav.21743
    Download Restriction: no


    References listed on IDEAS

    1. B. Curtis Eaves & Arthur F. Veinott, 2014. "Maximum-Stopping-Value Policies in Finite Markov Population Decision Chains," Mathematics of Operations Research, INFORMS, vol. 39(3), pages 597-606, August.
    2. Eugene A. Feinberg & Uriel G. Rothblum, 2012. "Splitting Randomized Stationary Policies in Total-Reward Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 37(1), pages 129-153, February.
    3. Rommert Dekker & Arie Hordijk, 1992. "Recurrence Conditions for Average and Blackwell Optimality in Denumerable State Markov Decision Chains," Mathematics of Operations Research, INFORMS, vol. 17(2), pages 271-289, May.
    4. Bruno Scherrer, 2016. "Improved and Generalized Upper Bounds on the Complexity of Policy Iteration," Mathematics of Operations Research, INFORMS, vol. 41(3), pages 758-774, August.
    5. Yinyu Ye, 2011. "The Simplex and Policy-Iteration Methods Are Strongly Polynomial for the Markov Decision Problem with a Fixed Discount Rate," Mathematics of Operations Research, INFORMS, vol. 36(4), pages 593-603, November.
    6. Uriel G. Rothblum & Peter Whittle, 1982. "Growth Optimality for Branching Markov Decision Chains," Mathematics of Operations Research, INFORMS, vol. 7(4), pages 582-601, November.
    7. K. Hinderer & K.-H. Waldmann, 2003. "The critical discount factor for finite Markovian decision processes with an absorbing set," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 57(1), pages 1-19, April.
    8. Eugene A. Feinberg & Pavlo O. Kasyanov & Nina V. Zadoianchuk, 2012. "Average Cost Markov Decision Processes with Weakly Continuous Transition Probabilities," Mathematics of Operations Research, INFORMS, vol. 37(4), pages 591-607, November.
    9. Alexander Zadorojniy & Guy Even & Adam Shwartz, 2009. "A Strongly Polynomial Algorithm for Controlled Queues," Mathematics of Operations Research, INFORMS, vol. 34(4), pages 992-1007, November.
    10. R. Dekker & A. Hordijk & F. M. Spieksma, 1994. "On the Relation Between Recurrence and Ergodicity Properties in Denumerable Markov Decision Chains," Mathematics of Operations Research, INFORMS, vol. 19(3), pages 539-559, August.
    11. Stanley R. Pliska, 1976. "Optimization of Multitype Branching Processes," Management Science, INFORMS, vol. 23(2), pages 117-124, October.
    12. Uriel G. Rothblum, 1975. "Normalized Markov Decision Chains I; Sensitive Discount Optimality," Operations Research, INFORMS, vol. 23(4), pages 785-795, August.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Dwi Ertiningsih & Sandjai Bhulai & Flora Spieksma, 2018. "A novel use of value iteration for deriving bounds for threshold and switching curve optimal policies," Naval Research Logistics (NRL), John Wiley & Sons, vol. 65(8), pages 638-659, December.
    2. Nicole Leder & Bernd Heidergott & Arie Hordijk, 2010. "An Approximation Approach for the Deviation Matrix of Continuous-Time Markov Processes with Application to Markov Decision Theory," Operations Research, INFORMS, vol. 58(4-part-1), pages 918-932, August.
    3. Yu Zhang & Vidyadhar G. Kulkarni, 2017. "Two-day appointment scheduling with patient preferences and geometric arrivals," Queueing Systems: Theory and Applications, Springer, vol. 85(1), pages 173-209, February.
    4. Xianping Guo & Yi Zhang, 2016. "Optimality of Mixed Policies for Average Continuous-Time Markov Decision Processes with Constraints," Mathematics of Operations Research, INFORMS, vol. 41(4), pages 1276-1296, November.
    5. José Niño-Mora, 2022. "Multi-Gear Bandits, Partial Conservation Laws, and Indexability," Mathematics, MDPI, vol. 10(14), pages 1-31, July.
    6. Guy Even & Alexander Zadorojniy, 2012. "Strong polynomiality of the Gass-Saaty shadow-vertex pivoting rule for controlled random walks," Annals of Operations Research, Springer, vol. 201(1), pages 159-167, December.
    7. Kousha Etessami & Alistair Stewart & Mihalis Yannakakis, 2020. "Polynomial Time Algorithms for Branching Markov Decision Processes and Probabilistic Min(Max) Polynomial Bellman Equations," Mathematics of Operations Research, INFORMS, vol. 45(1), pages 34-62, February.
    8. Arnab Basu & Mrinal K. Ghosh, 2018. "Nonzero-Sum Risk-Sensitive Stochastic Games on a Countable State Space," Mathematics of Operations Research, INFORMS, vol. 43(2), pages 516-532, May.
    9. F. M. Spieksma, 2016. "Kolmogorov forward equation and explosiveness in countable state Markov processes," Annals of Operations Research, Springer, vol. 241(1), pages 3-22, June.
    10. Ilbin Lee & Marina A. Epelman & H. Edwin Romeijn & Robert L. Smith, 2017. "Simplex Algorithm for Countable-State Discounted Markov Decision Processes," Operations Research, INFORMS, vol. 65(4), pages 1029-1042, August.
    11. David B. Brown & James E. Smith, 2013. "Optimal Sequential Exploration: Bandits, Clairvoyants, and Wildcats," Operations Research, INFORMS, vol. 61(3), pages 644-665, June.
    12. Armando F. Mendoza-Pérez & Héctor Jasso-Fuentes & Omar A. De-la-Cruz Courtois, 2016. "Constrained Markov decision processes in Borel spaces: from discounted to average optimality," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 84(3), pages 489-525, December.
    13. Bernd Heidergott & Arie Hordijk & Heinz Weisshaupt, 2006. "Measure-Valued Differentiation for Stationary Markov Chains," Mathematics of Operations Research, INFORMS, vol. 31(1), pages 154-172, February.
    14. Fabio Vitor & Todd Easton, 2018. "The double pivot simplex method," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 87(1), pages 109-137, February.
    15. Eugene A. Feinberg & Yan Liang, 2022. "Structure of optimal policies to periodic-review inventory models with convex costs and backorders for all values of discount factors," Annals of Operations Research, Springer, vol. 317(1), pages 29-45, October.
    16. Urmee Khan & Maxwell B Stinchcombe, 2016. "Planning for the Long Run: Programming with Patient, Pareto Responsive Preferences," Working Papers 201608, University of California at Riverside, Department of Economics.
    17. V Varagapriya & Vikas Vikram Singh & Abdel Lisser, 2023. "Joint chance-constrained Markov decision processes," Annals of Operations Research, Springer, vol. 322(2), pages 1013-1035, March.
    18. Isaac M. Sonin & Constantine Steinberg, 2016. "Continue, quit, restart probability model," Annals of Operations Research, Springer, vol. 241(1), pages 295-318, June.
    19. Daniel Hernández Hernández & Diego Hernández Bustos, 2017. "Local Poisson Equations Associated with Discrete-Time Markov Control Processes," Journal of Optimization Theory and Applications, Springer, vol. 173(1), pages 1-29, April.
    20. Daniel Adelman & Christiane Barz, 2014. "A Unifying Approximate Dynamic Programming Model for the Economic Lot Scheduling Problem," Mathematics of Operations Research, INFORMS, vol. 39(2), pages 374-402, May.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:wly:navres:v:66:y:2019:i:1:p:38-56. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Wiley Content Delivery (email available below). General contact details of provider: https://doi.org/10.1002/(ISSN)1520-6750.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.