Markov decision processes
Citations
Citations are extracted by the CitEc Project.
Cited by:
- Abhijit Gosavi, 2009. "Reinforcement Learning: A Tutorial Survey and Recent Advances," INFORMS Journal on Computing, INFORMS, vol. 21(2), pages 178-192, May.
- James T. Treharne & Charles R. Sox, 2002. "Adaptive Inventory Control for Nonstationary Demand and Partial Information," Management Science, INFORMS, vol. 48(5), pages 607-624, May.
- Yanling Chang & Alan Erera & Chelsea White, 2015. "Value of information for a leader–follower partially observed Markov game," Annals of Operations Research, Springer, vol. 235(1), pages 129-153, December.
- Nicola Secomandi & François Margot, 2009. "Reoptimization Approaches for the Vehicle-Routing Problem with Stochastic Demands," Operations Research, INFORMS, vol. 57(1), pages 214-230, February.
- Eike Nohdurft & Elisa Long & Stefan Spinler, 2017. "Was Angelina Jolie Right? Optimizing Cancer Prevention Strategies Among BRCA Mutation Carriers," Decision Analysis, INFORMS, vol. 14(3), pages 139-169, September.
- Declan Mungovan & Enda Howley & Jim Duggan, 2011. "The influence of random interactions and decision heuristics on norm evolution in social networks," Computational and Mathematical Organization Theory, Springer, vol. 17(2), pages 152-178, May.
- Hao Zhang, 2010. "Partially Observable Markov Decision Processes: A Geometric Technique and Analysis," Operations Research, INFORMS, vol. 58(1), pages 214-228, February.
- Serin, Yasemin, 1995. "A nonlinear programming model for partially observable Markov decision processes: Finite horizon case," European Journal of Operational Research, Elsevier, vol. 86(3), pages 549-564, November.
- Bouchra El Akraoui & Daoui Cherki, 2023. "Solving Finite-Horizon Discounted Non-Stationary MDPS," Folia Oeconomica Stetinensia, Sciendo, vol. 23(1), pages 1-15, June.
- Zong-Zhi Lin & James C. Bean & Chelsea C. White, 2004. "A Hybrid Genetic/Optimization Algorithm for Finite-Horizon, Partially Observed Markov Decision Processes," INFORMS Journal on Computing, INFORMS, vol. 16(1), pages 27-38, February.
- Ossai, Chinedu I. & Boswell, Brian & Davies, Ian J., 2016. "A Markovian approach for modelling the effects of maintenance on downtime and failure risk of wind turbine components," Renewable Energy, Elsevier, vol. 96(PA), pages 775-783.
- Andreatta, G. & Lulli, G., 2008. "A multi-period TSP with stochastic regular and urgent demands," European Journal of Operational Research, Elsevier, vol. 185(1), pages 122-132, February.
- Peter Buchholz & Dimitri Scheftelowitsch, 2019. "Computation of weighted sums of rewards for concurrent MDPs," Mathematical Methods of Operations Research, Springer; Gesellschaft für Operations Research (GOR); Nederlands Genootschap voor Besliskunde (NGB), vol. 89(1), pages 1-42, February.
- Shoshana Anily & Abraham Grosfeld-Nir, 2006. "An Optimal Lot-Sizing and Offline Inspection Policy in the Case of Nonrigid Demand," Operations Research, INFORMS, vol. 54(2), pages 311-323, April.
- Chernonog, Tatyana & Avinadav, Tal & Ben-Zvi, Tal, 2016. "A two-state partially observable Markov decision process with three actions," European Journal of Operational Research, Elsevier, vol. 254(3), pages 957-967.
- Touzani, Samir & Prakash, Anand Krishnan & Wang, Zhe & Agarwal, Shreya & Pritoni, Marco & Kiran, Mariam & Brown, Richard & Granderson, Jessica, 2021. "Controlling distributed energy resources via deep reinforcement learning for load flexibility and energy efficiency," Applied Energy, Elsevier, vol. 304(C).
- Abraham Grosfeld-Nir & Eyal Cohen & Yigal Gerchak, 2007. "Production to order and off-line inspection when the production process is partially observable," Naval Research Logistics (NRL), John Wiley & Sons, vol. 54(8), pages 845-858, December.
- Yates, C. M. & Rehman, T. & Chamberlain, A. T., 1996. "Evaluation of the potential effects of embryo transfer on milk production on commercial dairy herds: The development of a Markov chain model," Agricultural Systems, Elsevier, vol. 50(1), pages 65-79.
- Jianxun Luo & Wei Zhang & Hui Wang & Wenmiao Wei & Jinpeng He, 2023. "Research on Data-Driven Optimal Scheduling of Power System," Energies, MDPI, vol. 16(6), pages 1-15, March.
- Yossi Aviv & Amit Pazgal, 2005. "A Partially Observed Markov Decision Process for Dynamic Pricing," Management Science, INFORMS, vol. 51(9), pages 1400-1416, September.
- Cerqueti, Roy & Falbo, Paolo & Pelizzari, Cristian, 2017. "Relevant states and memory in Markov chain bootstrapping and simulation," European Journal of Operational Research, Elsevier, vol. 256(1), pages 163-177.
- Cerqueti, Roy & Falbo, Paolo & Pelizzari, Cristian, 2013. "Relevant States and Memory in Markov Chain Bootstrapping and Simulation," MPRA Paper 46250, University Library of Munich, Germany.
- Yates, C.M. & Rehman, T., 1998. "A linear programming formulation of the Markovian decision process approach to modelling the dairy replacement problem," Agricultural Systems, Elsevier, vol. 58(2), pages 185-201, October.
- Stephen M. Gilbert & Hena M. Bar, 1999. "The value of observing the condition of a deteriorating machine," Naval Research Logistics (NRL), John Wiley & Sons, vol. 46(7), pages 790-808, October.
- Kao, Jih-Forg, 1995. "Optimal recovery strategies for manufacturing systems," European Journal of Operational Research, Elsevier, vol. 80(2), pages 252-263, January.
- Jang, Wooseung & Shanthikumar, J. George, 2004. "Sequential process control under capacity constraints," European Journal of Operational Research, Elsevier, vol. 155(3), pages 695-714, June.
- Yanling Chang & Alan Erera & Chelsea White, 2015. "A leader–follower partially observed, multiobjective Markov game," Annals of Operations Research, Springer, vol. 235(1), pages 103-128, December.