
Solving Finite-Horizon Discounted Non-Stationary MDPS

Author

Listed:
  • Bouchra El Akraoui

    (Sultan Moulay Slimane University, Laboratory of Information Processing and Decision Support, Morocco)

  • Daoui Cherki

    (Sultan Moulay Slimane University, Laboratory of Information Processing and Decision Support, Morocco)

Abstract

Research background: Markov Decision Processes (MDPs) are a powerful framework for modeling many real-world finite-horizon problems in which a reward is maximized by choosing a sequence of actions. However, many problems, such as investment and financial-market problems in which the value of a reward decreases exponentially with time, require the introduction of an interest rate.

Purpose: This study investigates non-stationary finite-horizon MDPs with a discount factor to account for fluctuations in rewards over time.

Research methodology: To capture how rewards fluctuate over time, the authors define new non-stationary finite-horizon MDPs with a discount factor. First, the existence of an optimal policy for the proposed finite-horizon discounted MDPs is proven. Next, a new Discounted Backward Induction (DBI) algorithm is presented to find it. To demonstrate the value of the proposal, a financial model is used as an example of a finite-horizon discounted MDP, and an adaptive DBI algorithm is used to solve it.

Results: The proposed method computes the optimal values of the investment that maximize its expected total return while accounting for the time value of money.

Novelty: No previous study has examined dynamic finite-horizon problems that account for temporal fluctuations in rewards.
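The abstract describes the Discounted Backward Induction (DBI) algorithm only at a high level. As a reading aid, the minimal sketch below shows the standard backward-induction recursion for a finite-horizon discounted non-stationary MDP, V_t(s) = max_a [ r_t(s, a) + gamma * sum_{s'} p_t(s' | s, a) * V_{t+1}(s') ], with time-indexed rewards and transitions. The function name, array layout, and toy data are illustrative assumptions, not the authors' implementation.

import numpy as np

def discounted_backward_induction(P, R, gamma):
    # Illustrative sketch, not the paper's code.
    # P: (T, S, A, S) array of epoch-dependent transition probabilities.
    # R: (T, S, A) array of epoch-dependent rewards.
    # gamma: discount factor in (0, 1].
    T, S, A, _ = P.shape
    V = np.zeros((T + 1, S))             # terminal values V[T] = 0
    policy = np.zeros((T, S), dtype=int)
    for t in range(T - 1, -1, -1):       # sweep backward from the horizon
        # Q[s, a] = r_t(s, a) + gamma * sum over s' of p_t(s' | s, a) * V[t+1][s']
        Q = R[t] + gamma * (P[t] @ V[t + 1])
        policy[t] = Q.argmax(axis=1)     # greedy action in each state
        V[t] = Q.max(axis=1)
    return V, policy

# Toy usage on random data (hypothetical numbers, not the paper's financial model):
rng = np.random.default_rng(0)
T, S, A = 3, 2, 2
P = rng.dirichlet(np.ones(S), size=(T, S, A))   # valid transition kernels
R = rng.random((T, S, A))
V, pi = discounted_backward_induction(P, R, gamma=0.95)
print(V[0], pi[0])   # expected discounted return and first-epoch actions

The only change relative to undiscounted backward induction is the gamma factor multiplying the expected continuation value, which implements the time value of money mentioned in the abstract.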

Suggested Citation

  • Bouchra El Akraoui & Daoui Cherki, 2023. "Solving Finite-Horizon Discounted Non-Stationary MDPS," Folia Oeconomica Stetinensia, Sciendo, vol. 23(1), pages 1-15, June.
  • Handle: RePEc:vrs:foeste:v:23:y:2023:i:1:p:1-15:n:12
    DOI: 10.2478/foli-2023-0001

    Download full text from publisher

    File URL: https://doi.org/10.2478/foli-2023-0001
    Download Restriction: no

    File URL: https://libkey.io/10.2478/foli-2023-0001?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Eike Nohdurft & Elisa Long & Stefan Spinler, 2017. "Was Angelina Jolie Right? Optimizing Cancer Prevention Strategies Among BRCA Mutation Carriers," Decision Analysis, INFORMS, vol. 14(3), pages 139-169, September.
    2. Deligiannis, Michalis & Liberopoulos, George, 2023. "Dynamic ordering and buyer selection policies when service affects future demand," Omega, Elsevier, vol. 118(C).
    3. Yanling Chang & Alan Erera & Chelsea White, 2015. "Value of information for a leader–follower partially observed Markov game," Annals of Operations Research, Springer, vol. 235(1), pages 129-153, December.
    4. Ilbin Lee & Marina A. Epelman & H. Edwin Romeijn & Robert L. Smith, 2017. "Simplex Algorithm for Countable-State Discounted Markov Decision Processes," Operations Research, INFORMS, vol. 65(4), pages 1029-1042, August.
    5. David B. Brown & James E. Smith, 2013. "Optimal Sequential Exploration: Bandits, Clairvoyants, and Wildcats," Operations Research, INFORMS, vol. 61(3), pages 644-665, June.
    6. Zong-Zhi Lin & James C. Bean & Chelsea C. White, 2004. "A Hybrid Genetic/Optimization Algorithm for Finite-Horizon, Partially Observed Markov Decision Processes," INFORMS Journal on Computing, INFORMS, vol. 16(1), pages 27-38, February.
    7. Yanling Chang & Alan Erera & Chelsea White, 2015. "A leader–follower partially observed, multiobjective Markov game," Annals of Operations Research, Springer, vol. 235(1), pages 103-128, December.
    8. Mengdi Wang, 2020. "Randomized Linear Programming Solves the Markov Decision Problem in Nearly Linear (Sometimes Sublinear) Time," Mathematics of Operations Research, INFORMS, vol. 45(2), pages 517-546, May.
    9. Hao Zhang, 2010. "Partially Observable Markov Decision Processes: A Geometric Technique and Analysis," Operations Research, INFORMS, vol. 58(1), pages 214-228, February.
    10. Chernonog, Tatyana & Avinadav, Tal & Ben-Zvi, Tal, 2016. "A two-state partially observable Markov decision process with three actions," European Journal of Operational Research, Elsevier, vol. 254(3), pages 957-967.
    11. Touzani, Samir & Prakash, Anand Krishnan & Wang, Zhe & Agarwal, Shreya & Pritoni, Marco & Kiran, Mariam & Brown, Richard & Granderson, Jessica, 2021. "Controlling distributed energy resources via deep reinforcement learning for load flexibility and energy efficiency," Applied Energy, Elsevier, vol. 304(C).
    12. Serin, Yasemin, 1995. "A nonlinear programming model for partially observable Markov decision processes: Finite horizon case," European Journal of Operational Research, Elsevier, vol. 86(3), pages 549-564, November.
    13. Cerqueti, Roy & Falbo, Paolo & Pelizzari, Cristian, 2017. "Relevant states and memory in Markov chain bootstrapping and simulation," European Journal of Operational Research, Elsevier, vol. 256(1), pages 163-177.
    14. Kao, Jih-Forg, 1995. "Optimal recovery strategies for manufacturing systems," European Journal of Operational Research, Elsevier, vol. 80(2), pages 252-263, January.
    15. Jesús Loera, 2013. "Comments on: Recent progress on the combinatorial diameter of polytopes and simplicial complexes," TOP: An Official Journal of the Spanish Society of Statistics and Operations Research, Springer;Sociedad de Estadística e Investigación Operativa, vol. 21(3), pages 474-481, October.
    16. Fabio Vitor & Todd Easton, 2018. "The double pivot simplex method," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 87(1), pages 109-137, February.
    17. José Niño-Mora, 2022. "Multi-Gear Bandits, Partial Conservation Laws, and Indexability," Mathematics, MDPI, vol. 10(14), pages 1-31, July.
    18. V Varagapriya & Vikas Vikram Singh & Abdel Lisser, 2023. "Joint chance-constrained Markov decision processes," Annals of Operations Research, Springer, vol. 322(2), pages 1013-1035, March.
    19. Declan Mungovan & Enda Howley & Jim Duggan, 2011. "The influence of random interactions and decision heuristics on norm evolution in social networks," Computational and Mathematical Organization Theory, Springer, vol. 17(2), pages 152-178, May.
    20. Jianxun Luo & Wei Zhang & Hui Wang & Wenmiao Wei & Jinpeng He, 2023. "Research on Data-Driven Optimal Scheduling of Power System," Energies, MDPI, vol. 16(6), pages 1-15, March.

    More about this item

    Keywords

    Markov Decision Process; Dynamic Programming; Backward Induction algorithm.

    JEL classification:

    • C02 - Mathematical and Quantitative Methods - - General - - - Mathematical Economics
    • G11 - Financial Economics - - General Financial Markets - - - Portfolio Choice; Investment Decisions
    • C44 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods: Special Topics - - - Operations Research; Statistical Decision Theory


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:vrs:foeste:v:23:y:2023:i:1:p:1-15:n:12. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Peter Golla (email available below). General contact details of provider: https://www.sciendo.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.