A leader–follower partially observed, multiobjective Markov game
Author
Yanling Chang & Alan Erera & Chelsea White
Abstract
Suggested Citation
Yanling Chang & Alan Erera & Chelsea White, 2015. "A leader–follower partially observed, multiobjective Markov game," Annals of Operations Research, Springer, vol. 235(1), pages 103-128, December.
DOI: 10.1007/s10479-015-1935-0
References listed on IDEAS
- Vicki Bier & Santiago Oliveros & Larry Samuelson, 2007. "Choosing What to Protect: Strategic Defensive Allocation against an Unknown Attacker," Journal of Public Economic Theory, Association for Public Economic Theory, vol. 9(4), pages 563-587, August.
- Vicki Bier & Santiago Oliveros & Larry Samuelson, 2006. "Choosing What to Protect: Strategic Defensive Allocation against an Unknown Attacker," Levine's Bibliography 321307000000000158, UCLA Department of Economics.
- Chen Wang & Vicki M. Bier, 2011. "Target-Hardening Decisions Based on Uncertain Multiattribute Terrorist Utility," Decision Analysis, INFORMS, vol. 8(4), pages 286-302, December.
- Vicki M. Bier & Naraphorn Haphuriwat & Jaime Menoyo & Rae Zimmerman & Alison M. Culpen, 2008. "Optimal Resource Allocation for Defense of Targets Based on Differing Measures of Attractiveness," Risk Analysis, John Wiley & Sons, vol. 28(3), pages 763-770, June.
- Hao Zhang, 2010. "Partially Observable Markov Decision Processes: A Geometric Technique and Analysis," Operations Research, INFORMS, vol. 58(1), pages 214-228, February.
- Casey Rothschild & Laura McLay & Seth Guikema, 2012. "Adversarial Risk Analysis with Incomplete Information: A Level‐k Approach," Risk Analysis, John Wiley & Sons, vol. 32(7), pages 1219-1231, July.
- White, Chelsea C. & White, Douglas J., 1989. "Markov decision processes," European Journal of Operational Research, Elsevier, vol. 39(1), pages 1-16, March.
- Jun Zhuang & Vicki M. Bier, 2007. "Balancing Terrorism and Natural Disasters---Defensive Strategy with Endogenous Attacker Effort," Operations Research, INFORMS, vol. 55(5), pages 976-991, October.
- Laura McLay & Casey Rothschild & Seth Guikema, 2012. "Robust Adversarial Risk Analysis: A Level-k Approach," Decision Analysis, INFORMS, vol. 9(1), pages 41-54, March.
- K Deb, 2001. "Nonlinear goal programming using multi-objective genetic algorithms," Journal of the Operational Research Society, Palgrave Macmillan;The OR Society, vol. 52(3), pages 291-302, March.
- Richard D. Smallwood & Edward J. Sondik, 1973. "The Optimal Control of Partially Observable Markov Processes over a Finite Horizon," Operations Research, INFORMS, vol. 21(5), pages 1071-1088, October.
- Chelsea C. White & William T. Scherer, 1989. "Solution Procedures for Partially Observed Markov Decision Processes," Operations Research, INFORMS, vol. 37(5), pages 791-797, October.
- Niyazi Bakır, 2011. "A Stackelberg game model for resource allocation in cargo container security," Annals of Operations Research, Springer, vol. 187(1), pages 5-22, July.
- Daniel S. Bernstein & Robert Givan & Neil Immerman & Shlomo Zilberstein, 2002. "The Complexity of Decentralized Control of Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 27(4), pages 819-840, November.
- Yanling Chang & Alan Erera & Chelsea White, 2015. "Value of information for a leader–follower partially observed Markov game," Annals of Operations Research, Springer, vol. 235(1), pages 129-153, December.
- Konak, Abdullah & Coit, David W. & Smith, Alice E., 2006. "Multi-objective optimization using genetic algorithms: A tutorial," Reliability Engineering and System Safety, Elsevier, vol. 91(9), pages 992-1007.
- M. K. Ghosh & D. McDonald & S. Sinha, 2004. "Zero-Sum Stochastic Games with Partial Information," Journal of Optimization Theory and Applications, Springer, vol. 121(1), pages 99-118, April.
- Keeney,Ralph L. & Raiffa,Howard, 1993. "Decisions with Multiple Objectives," Cambridge Books, Cambridge University Press, number 9780521438834.
- Huseyin Cavusoglu & Young Kwark & Bin Mai & Srinivasan Raghunathan, 2013. "Passenger Profiling and Screening for Aviation Security in the Presence of Strategic Attackers," Decision Analysis, INFORMS, vol. 10(1), pages 63-81, March.
- George E. Monahan, 1982. "State of the Art---A Survey of Partially Observable Markov Decision Processes: Theory, Models, and Algorithms," Management Science, INFORMS, vol. 28(1), pages 1-16, January.
- Edward J. Sondik, 1978. "The Optimal Control of Partially Observable Markov Processes over the Infinite Horizon: Discounted Costs," Operations Research, INFORMS, vol. 26(2), pages 282-304, April.
- Kjell Hausken & Jun Zhuang, 2011. "Governments' and Terrorists' Defense and Attack in a T-Period Game," Decision Analysis, INFORMS, vol. 8(1), pages 46-70, March.
- K Hausken & J Zhuang, 2012. "The timing and deterrence of terrorist attacks due to exogenous dynamics," Journal of the Operational Research Society, Palgrave Macmillan;The OR Society, vol. 63(6), pages 726-735, June.
- James N. Eagle, 1984. "The Optimal Search for a Moving Target When the Search Path Is Constrained," Operations Research, INFORMS, vol. 32(5), pages 1107-1115, October.
- Hamid Mohtadi & Antu Panini Murshid, 2009. "Risk Analysis of Chemical, Biological, or Radionuclear Threats: Implications for Food Security," Risk Analysis, John Wiley & Sons, vol. 29(9), pages 1317-1335, September.
- James C. Bean, 1994. "Genetic Algorithms and Random Keys for Sequencing and Optimization," INFORMS Journal on Computing, INFORMS, vol. 6(2), pages 154-160, May.
- Zong-Zhi Lin & James C. Bean & Chelsea C. White, 2004. "A Hybrid Genetic/Optimization Algorithm for Finite-Horizon, Partially Observed Markov Decision Processes," INFORMS Journal on Computing, INFORMS, vol. 16(1), pages 27-38, February.
- Chelsea C. White & William T. Scherer, 1994. "Finite-Memory Suboptimal Design for Partially Observed Markov Decision Processes," Operations Research, INFORMS, vol. 42(3), pages 439-455, June.
Citations
Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.
Cited by:
- Julio B. Clempner, 2018. "Computing multiobjective Markov chains handled by the extraproximal method," Annals of Operations Research, Springer, vol. 271(2), pages 469-486, December.
- Satya S. Malladi & Alan L. Erera & Chelsea C. White, 2023. "Inventory control with modulated demand and a partially observed modulation process," Annals of Operations Research, Springer, vol. 321(1), pages 343-369, February.
- Denizalp Goktas & Jiayi Zhao & Amy Greenwald, 2022. "Zero-Sum Stochastic Stackelberg Games," Papers 2211.13847, arXiv.org.
Most related items
These are the items that most often cite the same works as this one and are cited by the same works as this one.
- Yanling Chang & Alan Erera & Chelsea White, 2015. "Value of information for a leader–follower partially observed Markov game," Annals of Operations Research, Springer, vol. 235(1), pages 129-153, December.
- Hunt, Kyle & Zhuang, Jun, 2024. "A review of attacker-defender games: Current state and paths forward," European Journal of Operational Research, Elsevier, vol. 313(2), pages 401-417.
- Mohammad E. Nikoofal & Mehmet Gümüs, 2015. "On the value of terrorist’s private information in a government’s defensive resource allocation problem," IISE Transactions, Taylor & Francis Journals, vol. 47(6), pages 533-555, June.
- Xiaojun (Gene) Shan & Jun Zhuang, 2014. "Modeling Credible Retaliation Threats in Deterring the Smuggling of Nuclear Weapons Using Partial Inspection---A Three-Stage Game," Decision Analysis, INFORMS, vol. 11(1), pages 43-62, March.
- Zong-Zhi Lin & James C. Bean & Chelsea C. White, 2004. "A Hybrid Genetic/Optimization Algorithm for Finite-Horizon, Partially Observed Markov Decision Processes," INFORMS Journal on Computing, INFORMS, vol. 16(1), pages 27-38, February.
- Hao Zhang, 2010. "Partially Observable Markov Decision Processes: A Geometric Technique and Analysis," Operations Research, INFORMS, vol. 58(1), pages 214-228, February.
- Abhijit Gosavi, 2009. "Reinforcement Learning: A Tutorial Survey and Recent Advances," INFORMS Journal on Computing, INFORMS, vol. 21(2), pages 178-192, May.
- Zhiheng Xu & Jun Zhuang, 2019. "A Study on a Sequential One‐Defender‐N‐Attacker Game," Risk Analysis, John Wiley & Sons, vol. 39(6), pages 1414-1432, June.
- Peiqiu Guan & Jun Zhuang, 2016. "Modeling Resources Allocation in Attacker‐Defender Games with “Warm Up” CSF," Risk Analysis, John Wiley & Sons, vol. 36(4), pages 776-791, April.
- Shan, Xiaojun & Zhuang, Jun, 2018. "Modeling cumulative defensive resource allocation against a strategic attacker in a multi-period multi-target sequential game," Reliability Engineering and System Safety, Elsevier, vol. 179(C), pages 12-26.
- Vineet M. Payyappalli & Jun Zhuang & Victor Richmond R. Jose, 2017. "Deterrence and Risk Preferences in Sequential Attacker–Defender Games with Continuous Efforts," Risk Analysis, John Wiley & Sons, vol. 37(11), pages 2229-2245, November.
- Xing Gao & Weijun Zhong & Shue Mei, 2013. "Information Security Investment When Hackers Disseminate Knowledge," Decision Analysis, INFORMS, vol. 10(4), pages 352-368, December.
- James T. Treharne & Charles R. Sox, 2002. "Adaptive Inventory Control for Nonstationary Demand and Partial Information," Management Science, INFORMS, vol. 48(5), pages 607-624, May.
- Szidarovszky, Ferenc & Luo, Yi, 2014. "Incorporating risk seeking attitude into defense strategy," Reliability Engineering and System Safety, Elsevier, vol. 123(C), pages 104-109.
- Chernonog, Tatyana & Avinadav, Tal & Ben-Zvi, Tal, 2016. "A two-state partially observable Markov decision process with three actions," European Journal of Operational Research, Elsevier, vol. 254(3), pages 957-967.
- Hunt, Kyle & Agarwal, Puneet & Zhuang, Jun, 2022. "On the adoption of new technology to enhance counterterrorism measures: An attacker–defender game with risk preferences," Reliability Engineering and System Safety, Elsevier, vol. 218(PB).
- Serin, Yasemin, 1995. "A nonlinear programming model for partially observable Markov decision processes: Finite horizon case," European Journal of Operational Research, Elsevier, vol. 86(3), pages 549-564, November.
- Simon, Jay & Omar, Ayman, 2020. "Cybersecurity investments in the supply chain: Coordination and a strategic attacker," European Journal of Operational Research, Elsevier, vol. 282(1), pages 161-171.
- Yossi Aviv & Amit Pazgal, 2005. "A Partially Observed Markov Decision Process for Dynamic Pricing," Management Science, INFORMS, vol. 51(9), pages 1400-1416, September.
- Wei Wang & Francesco Di Maio & Enrico Zio, 2019. "Adversarial Risk Analysis to Allocate Optimal Defense Resources for Protecting Cyber–Physical Systems from Cyber Attacks," Risk Analysis, John Wiley & Sons, vol. 39(12), pages 2766-2785, December.
More about this item
Keywords
Dynamic programming; Artificial intelligence; Sequential decision making
Statistics
Access and download statistics
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:annopr:v:235:y:2015:i:1:p:103-128:10.1007/s10479-015-1935-0. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.