Printed from https://ideas.repec.org/a/vrs/foeste/v23y2023i1p1-15n12.html

Solving Finite-Horizon Discounted Non-Stationary MDPS

Author

Listed:
  • Bouchra El Akraoui

(Sultan Moulay Slimane University, Laboratory of Information Processing and Decision Support, Morocco.)

  • Daoui Cherki

(Sultan Moulay Slimane University, Laboratory of Information Processing and Decision Support, Morocco.)

Abstract

Research background: Markov Decision Processes (MDPs) are a powerful framework for modelling many real-world finite-horizon problems in which a reward is maximized over a sequence of actions. However, many problems, such as investment and financial-market problems in which the value of a reward decreases exponentially with time, require the introduction of interest rates.

Purpose: This study investigates non-stationary finite-horizon MDPs with a discount factor to account for fluctuations in rewards over time.

Research methodology: To capture these fluctuations, the authors define new non-stationary finite-horizon MDPs with a discount factor. First, the existence of an optimal policy for the proposed finite-horizon discounted MDPs is proven. Next, a new Discounted Backward Induction (DBI) algorithm is presented to find it. To illustrate the proposal, a financial model is formulated as a finite-horizon discounted MDP and solved with an adapted DBI algorithm.

Results: The proposed method computes the optimal investment values that maximize the expected total return while accounting for the time value of money.

Novelty: No existing study has examined dynamic finite-horizon problems that account for temporal fluctuations in rewards.
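The record gives only a high-level description of the Discounted Backward Induction (DBI) algorithm. As a rough illustration of the underlying idea, the sketch below implements standard backward induction with a discount factor for a non-stationary finite-horizon MDP; all identifiers (states, actions, P, r, gamma, T) are illustrative assumptions, not the paper's notation or its actual algorithm.

```python
def discounted_backward_induction(states, actions, P, r, gamma, T):
    """Compute optimal values V[t][s] and a greedy policy pi[t][s] over horizon T.

    Non-stationary data (illustrative conventions):
      P[t][s][a] -- dict {next_state: probability} at decision epoch t,
      r[t][s][a] -- immediate reward at epoch t,
      gamma      -- discount factor in (0, 1].
    """
    # Terminal values are taken to be zero at epoch T.
    V = [{s: 0.0 for s in states} for _ in range(T + 1)]
    pi = [{} for _ in range(T)]
    for t in range(T - 1, -1, -1):  # sweep backward from the horizon
        for s in states:
            best_a, best_q = None, float("-inf")
            for a in actions:
                # One-step lookahead: reward plus discounted expected future value.
                q = r[t][s][a] + gamma * sum(
                    p * V[t + 1][s2] for s2, p in P[t][s][a].items()
                )
                if q > best_q:
                    best_a, best_q = a, q
            V[t][s], pi[t][s] = best_q, best_a
    return V, pi
```

For instance, with a single self-looping state, reward 1 for one action and 0 for the other, gamma = 0.5 and T = 2, the routine returns an initial value of 1.5 and selects the rewarding action at every epoch.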

Suggested Citation

  • Bouchra El Akraoui & Daoui Cherki, 2023. "Solving Finite-Horizon Discounted Non-Stationary MDPS," Folia Oeconomica Stetinensia, Sciendo, vol. 23(1), pages 1-15, June.
  • Handle: RePEc:vrs:foeste:v:23:y:2023:i:1:p:1-15:n:12
    DOI: 10.2478/foli-2023-0001

    Download full text from publisher

    File URL: https://doi.org/10.2478/foli-2023-0001
    Download Restriction: no



    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Deligiannis, Michalis & Liberopoulos, George, 2023. "Dynamic ordering and buyer selection policies when service affects future demand," Omega, Elsevier, vol. 118(C).
    2. Yanling Chang & Alan Erera & Chelsea White, 2015. "Value of information for a leader–follower partially observed Markov game," Annals of Operations Research, Springer, vol. 235(1), pages 129-153, December.
    3. Ilbin Lee & Marina A. Epelman & H. Edwin Romeijn & Robert L. Smith, 2017. "Simplex Algorithm for Countable-State Discounted Markov Decision Processes," Operations Research, INFORMS, vol. 65(4), pages 1029-1042, August.
    4. David B. Brown & James E. Smith, 2013. "Optimal Sequential Exploration: Bandits, Clairvoyants, and Wildcats," Operations Research, INFORMS, vol. 61(3), pages 644-665, June.
    5. Hao Zhang, 2010. "Partially Observable Markov Decision Processes: A Geometric Technique and Analysis," Operations Research, INFORMS, vol. 58(1), pages 214-228, February.
6. Chernonog, Tatyana & Avinadav, Tal & Ben-Zvi, Tal, 2016. "A two-state partially observable Markov decision process with three actions," European Journal of Operational Research, Elsevier, vol. 254(3), pages 957-967.
    7. Kao, Jih-Forg, 1995. "Optimal recovery strategies for manufacturing systems," European Journal of Operational Research, Elsevier, vol. 80(2), pages 252-263, January.
    8. Fabio Vitor & Todd Easton, 2018. "The double pivot simplex method," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 87(1), pages 109-137, February.
    9. V Varagapriya & Vikas Vikram Singh & Abdel Lisser, 2023. "Joint chance-constrained Markov decision processes," Annals of Operations Research, Springer, vol. 322(2), pages 1013-1035, March.
    10. Declan Mungovan & Enda Howley & Jim Duggan, 2011. "The influence of random interactions and decision heuristics on norm evolution in social networks," Computational and Mathematical Organization Theory, Springer, vol. 17(2), pages 152-178, May.
    11. Jianxun Luo & Wei Zhang & Hui Wang & Wenmiao Wei & Jinpeng He, 2023. "Research on Data-Driven Optimal Scheduling of Power System," Energies, MDPI, vol. 16(6), pages 1-15, March.
    12. Isaac M. Sonin & Constantine Steinberg, 2016. "Continue, quit, restart probability model," Annals of Operations Research, Springer, vol. 241(1), pages 295-318, June.
    13. Yates, C.M. & Rehman, T., 1998. "A linear programming formulation of the Markovian decision process approach to modelling the dairy replacement problem," Agricultural Systems, Elsevier, vol. 58(2), pages 185-201, October.
    14. Jang, Wooseung & Shanthikumar, J. George, 2004. "Sequential process control under capacity constraints," European Journal of Operational Research, Elsevier, vol. 155(3), pages 695-714, June.
    15. Eugene A. Feinberg & Jefferson Huang, 2019. "On the reduction of total‐cost and average‐cost MDPs to discounted MDPs," Naval Research Logistics (NRL), John Wiley & Sons, vol. 66(1), pages 38-56, February.
    16. Shoshana Anily & Abraham Grosfeld-Nir, 2006. "An Optimal Lot-Sizing and Offline Inspection Policy in the Case of Nonrigid Demand," Operations Research, INFORMS, vol. 54(2), pages 311-323, April.
    17. Nicola Secomandi & François Margot, 2009. "Reoptimization Approaches for the Vehicle-Routing Problem with Stochastic Demands," Operations Research, INFORMS, vol. 57(1), pages 214-230, February.
    18. Yates, C. M. & Rehman, T. & Chamberlain, A. T., 1996. "Evaluation of the potential effects of embryo transfer on milk production on commercial dairy herds: The development of a markov chain model," Agricultural Systems, Elsevier, vol. 50(1), pages 65-79.
    19. Stephen M. Gilbert & Hena M Bar, 1999. "The value of observing the condition of a deteriorating machine," Naval Research Logistics (NRL), John Wiley & Sons, vol. 46(7), pages 790-808, October.
    20. Guillot, Matthieu & Stauffer, Gautier, 2020. "The Stochastic Shortest Path Problem: A polyhedral combinatorics perspective," European Journal of Operational Research, Elsevier, vol. 285(1), pages 148-158.

    More about this item

    Keywords

    Markov Decision Process; Dynamic Programming; Backward Induction algorithm;

    JEL classification:

    • C02 - Mathematical and Quantitative Methods - - General - - - Mathematical Economics
    • G11 - Financial Economics - - General Financial Markets - - - Portfolio Choice; Investment Decisions
    • C44 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods: Special Topics - - - Operations Research; Statistical Decision Theory



    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.