Printed from https://ideas.repec.org/a/inm/oropre/v62y2014i4p864-875.html

Markov Decision Problems Where Means Bound Variances

Author

Listed:
  • Alessandro Arlotto

    (The Fuqua School of Business, Duke University, Durham, North Carolina, 27708)

  • Noah Gans

    (Operations and Information Management Department, The Wharton School, University of Pennsylvania, Philadelphia, Pennsylvania, 19104)

  • J. Michael Steele

    (Statistics Department, The Wharton School, University of Pennsylvania, Philadelphia, Pennsylvania, 19104)

Abstract

We identify a rich class of finite-horizon Markov decision problems (MDPs) for which the variance of the optimal total reward can be bounded by a simple linear function of its expected value. The class is characterized by three natural properties: reward nonnegativity and boundedness, existence of a do-nothing action, and optimal action monotonicity. These properties are commonly present and typically easy to check. Implications of the class properties and of the variance bound are illustrated by examples of MDPs from operations research, operations management, financial engineering, and combinatorial optimization.
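For orientation, the linear relationship described in the abstract can be sketched as follows. This is an illustrative rendering only, written under the assumption that single-period rewards take values in an interval [0, K]; the precise hypotheses, statement, and constants of the bound are given in the article itself.

```latex
% Illustrative sketch (not a verbatim statement from the article):
% assume a finite horizon n, single-period rewards bounded in [0, K],
% availability of a do-nothing action, and optimal action monotonicity.
\[
  \operatorname{Var}\!\left[ R_n(\pi_n^{*}) \right]
    \;\leq\; K \, \mathbb{E}\!\left[ R_n(\pi_n^{*}) \right],
\]
% where R_n(\pi_n^{*}) denotes the total reward collected over the
% n-period horizon under a mean-optimal policy \pi_n^{*}.
```

A bound of this form implies, in particular, that the coefficient of variation of the optimal total reward is at most the square root of K divided by the expected reward, so it vanishes as the expected reward grows.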

Suggested Citation

  • Alessandro Arlotto & Noah Gans & J. Michael Steele, 2014. "Markov Decision Problems Where Means Bound Variances," Operations Research, INFORMS, vol. 62(4), pages 864-875, August.
  • Handle: RePEc:inm:oropre:v:62:y:2014:i:4:p:864-875
    DOI: 10.1287/opre.2014.1281

    Download full text from publisher

    File URL: http://dx.doi.org/10.1287/opre.2014.1281
    Download Restriction: no

    File URL: https://libkey.io/10.1287/opre.2014.1281?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a page where you can use your library subscription to access this item

    References listed on IDEAS

    1. Bruss, F. Thomas & Delbaen, Freddy, 2004. "A central limit theorem for the optimal selection process for monotone subsequences of maximum expected length," Stochastic Processes and their Applications, Elsevier, vol. 114(2), pages 287-311, December.
    2. Kalyan Talluri & Garrett van Ryzin, 1998. "An Analysis of Bid-Price Controls for Network Revenue Management," Management Science, INFORMS, vol. 44(11-Part-1), pages 1577-1593, November.
    3. Jason D. Papastavrou & Srikanth Rajagopalan & Anton J. Kleywegt, 1996. "The Dynamic and Stochastic Knapsack Problem with Deadlines," Management Science, INFORMS, vol. 42(12), pages 1706-1718, December.
    4. Ying Huang & L. C. M. Kallenberg, 1994. "On Finding Optimal Policies for Markov Decision Chains: A Unifying Framework for Mean-Variance-Tradeoffs," Mathematics of Operations Research, INFORMS, vol. 19(2), pages 434-448, May.
    5. Carri W. Chan & Vivek F. Farias, 2009. "Stochastic Depletion Problems: Effective Myopic Policies for a Class of Dynamic Optimization Problems," Mathematics of Operations Research, INFORMS, vol. 34(2), pages 333-350, May.
    6. Mannor, Shie & Tsitsiklis, John N., 2013. "Algorithmic aspects of mean–variance optimization in Markov decision processes," European Journal of Operational Research, Elsevier, vol. 231(3), pages 645-653.
    7. David B. Brown & James E. Smith & Peng Sun, 2010. "Information Relaxations and Duality in Stochastic Dynamic Programs," Operations Research, INFORMS, vol. 58(4-part-1), pages 785-801, August.
    8. Bruss, F. Thomas & Delbaen, Freddy, 2001. "Optimal rules for the sequential selection of monotone subsequences of maximum expected length," Stochastic Processes and their Applications, Elsevier, vol. 96(2), pages 313-342, December.
    9. James, Barry & James, Kang & Qi, Yongcheng, 2008. "Limit theorems for correlated Bernoulli random variables," Statistics & Probability Letters, Elsevier, vol. 78(15), pages 2339-2345, October.
    10. C. Barz & K. Waldmann, 2007. "Risk-sensitive capacity control in revenue management," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 65(3), pages 565-579, June.
    11. Gregory P. Prastacos, 1983. "Optimal Sequential Investment Decisions Under Conditions of Uncertainty," Management Science, INFORMS, vol. 29(1), pages 118-134, January.
    12. Kun-Jen Chung, 1994. "Mean-Variance Tradeoffs in an Undiscounted MDP: The Unichain Case," Operations Research, INFORMS, vol. 42(1), pages 184-188, February.
    13. Kawai, Hajime, 1987. "A variance minimization problem for a Markov decision process," European Journal of Operational Research, Elsevier, vol. 31(1), pages 140-145, July.
    14. Jerzy A. Filar & L. C. M. Kallenberg & Huey-Miin Lee, 1989. "Variance-Penalized Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 14(1), pages 147-161, February.
    15. C. Derman & G. J. Lieberman & S. M. Ross, 1975. "A Stochastic Sequential Allocation Model," Operations Research, INFORMS, vol. 23(6), pages 1120-1130, December.
    16. Melike Baykal-Gürsoy & Keith W. Ross, 1992. "Variability Sensitive Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 17(3), pages 558-571, August.
    17. Matthew J. Sobel, 1994. "Mean-Variance Tradeoffs in an Undiscounted MDP," Operations Research, INFORMS, vol. 42(1), pages 175-183, February.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Alessandro Arlotto & J. Michael Steele, 2018. "A Central Limit Theorem for Costs in Bulinskaya’s Inventory Management Problem When Deliveries Face Delays," Methodology and Computing in Applied Probability, Springer, vol. 20(3), pages 839-854, September.
    2. Ilya O. Ryzhov & Martijn R. K. Mes & Warren B. Powell & Gerald van den Berg, 2019. "Bayesian Exploration for Approximate Dynamic Programming," Operations Research, INFORMS, vol. 67(1), pages 198-214, January.
    3. Arlotto, Alessandro & Nguyen, Vinh V. & Steele, J. Michael, 2015. "Optimal online selection of a monotone subsequence: a central limit theorem," Stochastic Processes and their Applications, Elsevier, vol. 125(9), pages 3596-3622.
    4. Alessandro Arlotto & J. Michael Steele, 2016. "A Central Limit Theorem for Temporally Nonhomogenous Markov Chains with Applications to Dynamic Programming," Mathematics of Operations Research, INFORMS, vol. 41(4), pages 1448-1468, November.
    5. Jingnan Fan & Andrzej Ruszczynski, 2014. "Process-Based Risk Measures and Risk-Averse Control of Discrete-Time Systems," Papers 1411.2675, arXiv.org, revised Nov 2016.
    6. Jingnan Fan & Andrzej Ruszczyński, 2018. "Risk measurement and risk-averse control of partially observable discrete-time Markov systems," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 88(2), pages 161-184, October.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Li Xia, 2020. "Risk‐Sensitive Markov Decision Processes with Combined Metrics of Mean and Variance," Production and Operations Management, Production and Operations Management Society, vol. 29(12), pages 2808-2827, December.
    2. Karel Sladký, 2005. "On mean reward variance in semi-Markov processes," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 62(3), pages 387-397, December.
    3. Ma, Shuai & Ma, Xiaoteng & Xia, Li, 2023. "A unified algorithm framework for mean-variance optimization in discounted Markov decision processes," European Journal of Operational Research, Elsevier, vol. 311(3), pages 1057-1067.
    4. Karel Sladký, 2013. "Risk-Sensitive and Mean Variance Optimality in Markov Decision Processes," Czech Economic Review, Charles University Prague, Faculty of Social Sciences, Institute of Economic Studies, vol. 7(3), pages 146-161, November.
    5. Santiago R. Balseiro & David B. Brown, 2019. "Approximations to Stochastic Dynamic Programs via Information Relaxation Duality," Operations Research, INFORMS, vol. 67(2), pages 577-597, March.
    6. Dmitry Krass & O. J. Vrieze, 2002. "Achieving Target State-Action Frequencies in Multichain Average-Reward Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 27(3), pages 545-566, August.
    7. Yuanzheng Ma & Tong Wang & Huan Zheng, 2023. "On fairness and efficiency in nonprofit operations: Dynamic resource allocations," Production and Operations Management, Production and Operations Management Society, vol. 32(6), pages 1778-1792, June.
    8. Yuhang Ma & Paat Rusmevichientong & Mika Sumida & Huseyin Topaloglu, 2020. "An Approximation Algorithm for Network Revenue Management Under Nonstationary Arrivals," Operations Research, INFORMS, vol. 68(3), pages 834-855, May.
    9. Jingnan Fan & Andrzej Ruszczyński, 2018. "Risk measurement and risk-averse control of partially observable discrete-time Markov systems," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 88(2), pages 161-184, October.
    10. Gnedin, Alexander & Seksenbayev, Amirlan, 2021. "Diffusion approximations in the online increasing subsequence problem," Stochastic Processes and their Applications, Elsevier, vol. 139(C), pages 298-320.
    11. Pak, K. & Dekker, R., 2004. "Cargo Revenue Management: Bid-Prices for a 0-1 Multi Knapsack Problem," ERIM Report Series Research in Management ERS-2004-055-LIS, Erasmus Research Institute of Management (ERIM), ERIM is the joint research institute of the Rotterdam School of Management, Erasmus University and the Erasmus School of Economics (ESE) at Erasmus University Rotterdam.
    12. Michael Jong Kim, 2016. "Robust Control of Partially Observable Failing Systems," Operations Research, INFORMS, vol. 64(4), pages 999-1014, August.
    13. Sebastian Koch & Jochen Gönsch & Michael Hassler & Robert Klein, 2016. "Practical decision rules for risk-averse revenue management using simulation-based optimization," Journal of Revenue and Pricing Management, Palgrave Macmillan, vol. 15(6), pages 468-487, December.
    14. Xuhan Tian & Junmin (Jim) Shi & Xiangtong Qi, 2022. "Stochastic Sequential Allocations for Creative Crowdsourcing," Production and Operations Management, Production and Operations Management Society, vol. 31(2), pages 697-714, February.
    15. Alexander G. Nikolaev & Sheldon H. Jacobson, 2010. "Technical Note ---Stochastic Sequential Decision-Making with a Random Number of Jobs," Operations Research, INFORMS, vol. 58(4-part-1), pages 1023-1027, August.
    16. Anton J. Kleywegt & Jason D. Papastavrou, 2001. "The Dynamic and Stochastic Knapsack Problem with Random Sized Items," Operations Research, INFORMS, vol. 49(1), pages 26-41, February.
    17. Grace Y. Lin & Yingdong Lu & David D. Yao, 2008. "The Stochastic Knapsack Revisited: Switch-Over Policies and Dynamic Pricing," Operations Research, INFORMS, vol. 56(4), pages 945-957, August.
    18. Jingnan Fan & Andrzej Ruszczynski, 2014. "Process-Based Risk Measures and Risk-Averse Control of Discrete-Time Systems," Papers 1411.2675, arXiv.org, revised Nov 2016.
    19. Chiang, David Ming-Huang & Wu, Andy Wei-Di, 2011. "Discrete-order admission ATP model with joint effect of margin and order size in a MTO environment," International Journal of Production Economics, Elsevier, vol. 133(2), pages 761-775, October.
    20. Arlotto, Alessandro & Nguyen, Vinh V. & Steele, J. Michael, 2015. "Optimal online selection of a monotone subsequence: a central limit theorem," Stochastic Processes and their Applications, Elsevier, vol. 125(9), pages 3596-3622.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:inm:oropre:v:62:y:2014:i:4:p:864-875. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Chris Asher (email available below). General contact details of provider: https://edirc.repec.org/data/inforea.html .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.