IDEAS home Printed from https://ideas.repec.org/a/spr/mathme/v89y2019i1d10.1007_s00186-018-0653-1.html

Computation of weighted sums of rewards for concurrent MDPs

Author

Listed:
  • Peter Buchholz

    (Informatik IV, TU Dortmund)

  • Dimitri Scheftelowitsch

    (Informatik IV, TU Dortmund)

Abstract

We consider sets of Markov decision processes (MDPs) with shared state and action spaces and assume that the individual MDPs in such a set represent different scenarios for a system’s operation. In this setting, we solve the problem of finding a single policy that performs well under each of these scenarios by considering the weighted sum of value vectors for each of the scenarios. Several solution approaches as well as the general complexity of the problem are discussed and algorithms that are based on these solution approaches are presented. Finally, we compare the derived algorithms on a set of benchmark problems.
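The abstract's objective, the weighted sum of value vectors of a single policy across scenarios, can be sketched numerically. The following is a minimal illustration (not the authors' algorithm): for a fixed deterministic policy it evaluates each scenario's discounted value vector via the linear system `(I - γ P_π) V = r_π` and combines the results with the scenario weights. All data (`P1`, `P2`, `r1`, `r2`, the weights, and the policy) are invented toy inputs.

```python
import numpy as np

def scenario_value(P, r, policy, gamma=0.9):
    """Discounted value vector of a fixed deterministic policy in one MDP scenario.

    P: transition tensor of shape (A, S, S); r: reward matrix of shape (S, A);
    policy: length-S array giving the chosen action in each state.
    Solves the policy-evaluation system (I - gamma * P_pi) V = r_pi.
    """
    S = r.shape[0]
    P_pi = P[policy, np.arange(S), :]   # (S, S) transition matrix under the policy
    r_pi = r[np.arange(S), policy]      # (S,) reward vector under the policy
    return np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)

def weighted_sum_of_values(scenarios, weights, policy, gamma=0.9):
    """Weighted sum of the scenarios' value vectors for one shared policy."""
    return sum(w * scenario_value(P, r, policy, gamma)
               for (P, r), w in zip(scenarios, weights))

# Two toy 2-state, 2-action scenarios sharing state and action spaces.
P1 = np.array([[[0.9, 0.1], [0.2, 0.8]],
               [[0.5, 0.5], [0.6, 0.4]]])
P2 = np.array([[[0.7, 0.3], [0.1, 0.9]],
               [[0.4, 0.6], [0.8, 0.2]]])
r1 = np.array([[1.0, 0.0], [0.0, 2.0]])
r2 = np.array([[0.5, 1.0], [1.5, 0.0]])
policy = np.array([0, 1])               # action 0 in state 0, action 1 in state 1
V = weighted_sum_of_values([(P1, r1), (P2, r2)], [0.5, 0.5], policy)
```

The hard part of the paper is the outer search over policies (a single policy must serve all scenarios at once, which the authors show makes the problem hard); this sketch only covers the inner evaluation step for a candidate policy.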

Suggested Citation

  • Peter Buchholz & Dimitri Scheftelowitsch, 2019. "Computation of weighted sums of rewards for concurrent MDPs," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 89(1), pages 1-42, February.
  • Handle: RePEc:spr:mathme:v:89:y:2019:i:1:d:10.1007_s00186-018-0653-1
    DOI: 10.1007/s00186-018-0653-1

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s00186-018-0653-1
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1007/s00186-018-0653-1?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Richard F. Serfozo, 1979. "Technical Note—An Equivalence Between Continuous and Discrete Time Markov Decision Processes," Operations Research, INFORMS, vol. 27(3), pages 616-620, June.
    2. Pedro A. Castillo Castillo & Pedro M. Castro & Vladimir Mahalec, 2018. "Global optimization of MIQCPs with dynamic piecewise relaxations," Journal of Global Optimization, Springer, vol. 71(4), pages 691-716, August.
    3. R. T. Rockafellar & Roger J.-B. Wets, 1991. "Scenarios and Policy Aggregation in Optimization Under Uncertainty," Mathematics of Operations Research, INFORMS, vol. 16(1), pages 119-147, February.
    4. Wolfram Wiesemann & Daniel Kuhn & Berç Rustem, 2013. "Robust Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 38(1), pages 153-183, February.
    5. White, Chelsea C. & White, Douglas J., 1989. "Markov decision processes," European Journal of Operational Research, Elsevier, vol. 39(1), pages 1-16, March.
    6. Dimitris Bertsimas & Velibor V. Mišić, 2017. "Robust Product Line Design," Operations Research, INFORMS, vol. 65(1), pages 19-37, February.
    7. Chelsea C. White & Hany K. Eldeib, 1994. "Markov Decision Processes with Imprecise Transition Probabilities," Operations Research, INFORMS, vol. 42(4), pages 739-749, August.
    8. Christos H. Papadimitriou & John N. Tsitsiklis, 1987. "The Complexity of Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 12(3), pages 441-450, August.
    9. Garud N. Iyengar, 2005. "Robust Dynamic Programming," Mathematics of Operations Research, INFORMS, vol. 30(2), pages 257-280, May.
    10. Jitka Dupačová & Giorgio Consigli & Stein Wallace, 2000. "Scenarios for Multistage Stochastic Programs," Annals of Operations Research, Springer, vol. 100(1), pages 25-53, December.
    11. Colvin, Matthew & Maravelias, Christos T., 2010. "Modeling methods and a branch and cut algorithm for pharmaceutical clinical trial planning using stochastic programming," European Journal of Operational Research, Elsevier, vol. 203(1), pages 205-215, May.
    12. Jay K. Satia & Roy E. Lave, 1973. "Markovian Decision Processes with Uncertain Transition Probabilities," Operations Research, INFORMS, vol. 21(3), pages 728-740, June.
    13. Arnab Nilim & Laurent El Ghaoui, 2005. "Robust Control of Markov Decision Processes with Uncertain Transition Matrices," Operations Research, INFORMS, vol. 53(5), pages 780-798, October.
    14. J. K. Satia & R. E. Lave, 1973. "Markovian Decision Processes with Probabilistic Observation of States," Management Science, INFORMS, vol. 20(1), pages 1-13, September.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Hossein Hashemi Doulabi & Shabbir Ahmed & George Nemhauser, 2022. "State-Variable Modeling for a Class of Two-Stage Stochastic Optimization Problems," INFORMS Journal on Computing, INFORMS, vol. 34(1), pages 354-369, January.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Andrew J. Keith & Darryl K. Ahner, 2021. "A survey of decision making and optimization under uncertainty," Annals of Operations Research, Springer, vol. 300(2), pages 319-353, May.
    2. V Varagapriya & Vikas Vikram Singh & Abdel Lisser, 2023. "Joint chance-constrained Markov decision processes," Annals of Operations Research, Springer, vol. 322(2), pages 1013-1035, March.
    3. Zeynep Turgay & Fikri Karaesmen & Egemen Lerzan Örmeci, 2018. "Structural properties of a class of robust inventory and queueing control problems," Naval Research Logistics (NRL), John Wiley & Sons, vol. 65(8), pages 699-716, December.
    4. Wolfram Wiesemann & Daniel Kuhn & Berç Rustem, 2013. "Robust Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 38(1), pages 153-183, February.
    5. Zhu, Zhicheng & Xiang, Yisha & Zhao, Ming & Shi, Yue, 2023. "Data-driven remanufacturing planning with parameter uncertainty," European Journal of Operational Research, Elsevier, vol. 309(1), pages 102-116.
    6. Felipe Caro & Aparupa Das Gupta, 2022. "Robust control of the multi-armed bandit problem," Annals of Operations Research, Springer, vol. 317(2), pages 461-480, October.
    7. Varagapriya, V & Singh, Vikas Vikram & Lisser, Abdel, 2024. "Rank-1 transition uncertainties in constrained Markov decision processes," European Journal of Operational Research, Elsevier, vol. 318(1), pages 167-178.
    8. Arthur Flajolet & Sébastien Blandin & Patrick Jaillet, 2018. "Robust Adaptive Routing Under Uncertainty," Operations Research, INFORMS, vol. 66(1), pages 210-229, January.
    9. David L. Kaufman & Andrew J. Schaefer, 2013. "Robust Modified Policy Iteration," INFORMS Journal on Computing, INFORMS, vol. 25(3), pages 396-410, August.
    10. Bren, Austin & Saghafian, Soroush, 2018. "Data-Driven Percentile Optimization for Multi-Class Queueing Systems with Model Ambiguity: Theory and Application," Working Paper Series rwp18-008, Harvard University, John F. Kennedy School of Government.
    11. Erick Delage & Shie Mannor, 2010. "Percentile Optimization for Markov Decision Processes with Parameter Uncertainty," Operations Research, INFORMS, vol. 58(1), pages 203-213, February.
    12. Michael Jong Kim, 2016. "Robust Control of Partially Observable Failing Systems," Operations Research, INFORMS, vol. 64(4), pages 999-1014, August.
    13. Bakker, Hannah & Dunke, Fabian & Nickel, Stefan, 2020. "A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice," Omega, Elsevier, vol. 96(C).
    14. Boloori, Alireza & Saghafian, Soroush & Chakkera, Harini A. A. & Cook, Curtiss B., 2017. "Data-Driven Management of Post-transplant Medications: An APOMDP Approach," Working Paper Series rwp17-036, Harvard University, John F. Kennedy School of Government.
    15. Zeynep Turgay & Fikri Karaesmen & E. Örmeci, 2015. "A dynamic inventory rationing problem with uncertain demand and production rates," Annals of Operations Research, Springer, vol. 231(1), pages 207-228, August.
    16. Schapaugh, Adam W. & Tyre, Andrew J., 2013. "Accounting for parametric uncertainty in Markov decision processes," Ecological Modelling, Elsevier, vol. 254(C), pages 15-21.
    17. Erim Kardeş & Fernando Ordóñez & Randolph W. Hall, 2011. "Discounted Robust Stochastic Games and an Application to Queueing Control," Operations Research, INFORMS, vol. 59(2), pages 365-382, April.
    18. Maximilian Blesch & Philipp Eisenhauer, 2021. "Robust decision-making under risk and ambiguity," Papers 2104.12573, arXiv.org, revised Oct 2021.
    19. Rasouli, Mohammad & Saghafian, Soroush, 2018. "Robust Partially Observable Markov Decision Processes," Working Paper Series rwp18-027, Harvard University, John F. Kennedy School of Government.
    20. Wolfram Wiesemann & Daniel Kuhn & Berç Rustem, 2010. "Robust Markov Decision Processes," Working Papers 034, COMISEF.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:mathme:v:89:y:2019:i:1:d:10.1007_s00186-018-0653-1. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.