IDEAS home Printed from https://ideas.repec.org/a/plo/pone00/0102434.html

Efficient Use of Information in Adaptive Management with an Application to Managing Recreation near Golden Eagle Nesting Sites

Author

Listed:
  • Paul L Fackler
  • Krishna Pacifici
  • Julien Martin
  • Carol McIntyre

Abstract

Considerable uncertainty generally exists concerning the behavior of ecological systems. Adaptive management has been developed to address such structural uncertainty, while recognizing that decisions must be made without full knowledge of how a system behaves. This paradigm attempts to use new information that develops during the course of management to learn how the system works. To date, however, adaptive management has used a very limited information set to characterize the learning that is possible. This paper uses an extension of the Partially Observable Markov Decision Process (POMDP) framework to expand the information set used to update belief in competing models. This feature can potentially increase the speed of learning through adaptive management and lead to better management in the future. We apply this framework to a case study wherein interest lies in managing recreational restrictions around golden eagle (Aquila chrysaetos) nesting sites. The ultimate management objective is to maintain an abundant eagle population in Denali National Park while minimizing the regulatory burden on park visitors. To capture this objective, we developed a utility function that trades off expected breeding success against hiker access. Our work is relevant to the management of human activities in protected areas, but more generally demonstrates some of the benefits of POMDP in the context of adaptive management.
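The core learning mechanism the abstract describes — updating belief in competing system models as new observations arrive — is Bayes' rule applied to a model prior. The following is a minimal illustrative sketch, not the paper's implementation; the model names, prior, and likelihood values are hypothetical.

```python
def update_belief(belief, likelihoods):
    """Bayes' rule: posterior belief in each model is proportional to
    prior belief times the likelihood of the new observation under that model."""
    posterior = [b * l for b, l in zip(belief, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Two hypothetical competing models of eagle breeding response to hikers.
belief = [0.5, 0.5]  # uniform prior over ["no effect", "strong effect"]

# Assumed likelihood of observing a failed nest near an open trail
# under each model (illustrative numbers only).
likelihoods = [0.3, 0.7]

belief = update_belief(belief, likelihoods)
# Belief shifts toward the "strong effect" model after the observation.
```

Expanding the information set, as the paper proposes, amounts to conditioning this update on richer observations (e.g., multiple monitoring signals rather than a single outcome), which sharpens the likelihoods and speeds the shift in belief.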

Suggested Citation

  • Paul L Fackler & Krishna Pacifici & Julien Martin & Carol McIntyre, 2014. "Efficient Use of Information in Adaptive Management with an Application to Managing Recreation near Golden Eagle Nesting Sites," PLOS ONE, Public Library of Science, vol. 9(8), pages 1-14, August.
  • Handle: RePEc:plo:pone00:0102434
    DOI: 10.1371/journal.pone.0102434

    Download full text from publisher

    File URL: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0102434
    Download Restriction: no

    File URL: https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0102434&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pone.0102434?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. George E. Monahan, 1982. "State of the Art---A Survey of Partially Observable Markov Decision Processes: Theory, Models, and Algorithms," Management Science, INFORMS, vol. 28(1), pages 1-16, January.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Baggio, Michele & Fackler, Paul L., 2016. "Optimal management with reversible regime shifts," Journal of Economic Behavior & Organization, Elsevier, vol. 132(PB), pages 124-136.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Wooseung Jang & J. George Shanthikumar, 2002. "Stochastic allocation of inspection capacity to competitive processes," Naval Research Logistics (NRL), John Wiley & Sons, vol. 49(1), pages 78-94, February.
    2. Dinah Rosenberg & Eilon Solan & Nicolas Vieille, 2009. "Protocols with No Acknowledgment," Operations Research, INFORMS, vol. 57(4), pages 905-915, August.
    3. Yanling Chang & Alan Erera & Chelsea White, 2015. "Value of information for a leader–follower partially observed Markov game," Annals of Operations Research, Springer, vol. 235(1), pages 129-153, December.
    4. Churlzu Lim & J. Neil Bearden & J. Cole Smith, 2006. "Sequential Search with Multiattribute Options," Decision Analysis, INFORMS, vol. 3(1), pages 3-15, March.
    5. Chiel van Oosterom & Lisa M. Maillart & Jeffrey P. Kharoufeh, 2017. "Optimal maintenance policies for a safety‐critical system and its deteriorating sensor," Naval Research Logistics (NRL), John Wiley & Sons, vol. 64(5), pages 399-417, August.
    6. Malek Ebadi & Raha Akhavan-Tabatabaei, 2021. "Personalized Cotesting Policies for Cervical Cancer Screening: A POMDP Approach," Mathematics, MDPI, vol. 9(6), pages 1-20, March.
    7. Tianhu Deng & Zuo-Jun Max Shen & J. George Shanthikumar, 2014. "Statistical Learning of Service-Dependent Demand in a Multiperiod Newsvendor Setting," Operations Research, INFORMS, vol. 62(5), pages 1064-1076, October.
    8. İ. Esra Büyüktahtakın & Robert G. Haight, 2018. "A review of operations research models in invasive species management: state of the art, challenges, and future directions," Annals of Operations Research, Springer, vol. 271(2), pages 357-403, December.
    9. Hao Zhang, 2010. "Partially Observable Markov Decision Processes: A Geometric Technique and Analysis," Operations Research, INFORMS, vol. 58(1), pages 214-228, February.
    10. Chernonog, Tatyana & Avinadav, Tal & Ben-Zvi, Tal, 2016. "A two-state partially observable Markov decision process with three actions," European Journal of Operational Research, Elsevier, vol. 254(3), pages 957-967.
    11. Saghafian, Soroush, 2018. "Ambiguous partially observable Markov decision processes: Structural results and applications," Journal of Economic Theory, Elsevier, vol. 178(C), pages 1-35.
    12. Kobayashi, Teruyoshi, 2009. "Announcements and the effectiveness of monetary policy: A view from the US prime rate," Journal of Banking & Finance, Elsevier, vol. 33(12), pages 2253-2266, December.
    13. Turgay Ayer & Oguzhan Alagoz & Natasha K. Stout & Elizabeth S. Burnside, 2016. "Heterogeneity in Women’s Adherence and Its Role in Optimal Breast Cancer Screening Policies," Management Science, INFORMS, vol. 62(5), pages 1339-1362, May.
    14. Otten, Maarten & Timmer, Judith & Witteveen, Annemieke, 2020. "Stratified breast cancer follow-up using a continuous state partially observable Markov decision process," European Journal of Operational Research, Elsevier, vol. 281(2), pages 464-474.
    15. Arifoglu, Kenan & Özekici, Süleyman, 2011. "Inventory management with random supply and imperfect information: A hidden Markov model," International Journal of Production Economics, Elsevier, vol. 134(1), pages 123-137, November.
    16. Özgür-Ünlüakın, Demet & Bilgiç, Taner, 2017. "Performance analysis of an aggregation and disaggregation solution procedure to obtain a maintenance plan for a partially observable multi-component system," Reliability Engineering and System Safety, Elsevier, vol. 167(C), pages 652-662.
    17. Carolina Saavedra Sueldo & Ivo Perez Colo & Mariano De Paula & Sebastián A. Villar & Gerardo G. Acosta, 2023. "ROS-based architecture for fast digital twin development of smart manufacturing robotized systems," Annals of Operations Research, Springer, vol. 322(1), pages 75-99, March.
    18. Papakonstantinou, K.G. & Shinozuka, M., 2014. "Planning structural inspection and maintenance policies via dynamic programming and Markov processes. Part I: Theory," Reliability Engineering and System Safety, Elsevier, vol. 130(C), pages 202-213.
    19. Arifoglu, Kenan & Özekici, Süleyman, 2010. "Optimal policies for inventory systems with finite capacity and partially observed Markov-modulated demand and supply processes," European Journal of Operational Research, Elsevier, vol. 204(3), pages 421-438, August.
    20. Wolfram Wiesemann & Daniel Kuhn & Berç Rustem, 2013. "Robust Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 38(1), pages 153-183, February.

    More about this item

    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pone00:0102434. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: plosone (email available below). General contact details of provider: https://journals.plos.org/plosone/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.