IDEAS home Printed from https://ideas.repec.org/a/eee/apmaco/v458y2023ics0096300323003703.html

Reactive means in the iterated Prisoner’s dilemma

Author

Listed:
  • Molnar, Grant
  • Hammond, Caroline
  • Fu, Feng

Abstract

The Iterated Prisoner’s Dilemma (IPD) is a well-studied framework for understanding direct reciprocity and cooperation in pairwise encounters. However, principled ways of measuring the morality of various IPD strategies are still largely lacking. Here, we partially address this issue by proposing a suite of plausible morality metrics that quantify four aspects of justice. We focus our closed-form calculations on the class of reactive strategies because of their mathematical tractability and expressive power. We define reactive means as a tool for studying how actors in the IPD and Iterated Snowdrift Game (ISG) behave under typical circumstances. We compute reactive means for four functions intended to capture human intuitions about “goodness” and “fair play”. Two of these functions are strongly anticorrelated with success in the IPD and ISG, and the other two are weakly anticorrelated with success. Our results will aid in evaluating and comparing powerful IPD strategies based on machine learning algorithms, using simple and intuitive morality metrics.
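The reactive strategies mentioned in the abstract admit a compact numerical treatment: a reactive strategy is a pair (p, q), the probabilities of cooperating after the opponent's cooperation or defection, and two reactive players induce a Markov chain over the four joint outcomes. The sketch below (not the authors' code; function and variable names are illustrative, and standard payoff values R=3, S=0, T=5, P=1 are assumed) computes long-run per-round payoffs from that chain's stationary distribution:

```python
import numpy as np

def stationary_payoffs(p1, q1, p2, q2, payoffs=(3.0, 0.0, 5.0, 1.0)):
    """Long-run per-round payoffs when two reactive strategies meet.

    A reactive strategy (p, q) cooperates with probability p after the
    opponent cooperated and probability q after the opponent defected.
    `payoffs` holds the usual Prisoner's Dilemma values (R, S, T, P).
    Assumes 0 < p, q < 1 for both players so the chain is ergodic.
    """
    R, S, T, P = payoffs
    # Joint states as (player 1's move, player 2's move); 1 = cooperate.
    states = [(1, 1), (1, 0), (0, 1), (0, 0)]
    M = np.zeros((4, 4))
    for i, (a1, a2) in enumerate(states):
        c1 = p1 if a2 else q1   # player 1 reacts to player 2's last move
        c2 = p2 if a1 else q2   # player 2 reacts to player 1's last move
        for j, (b1, b2) in enumerate(states):
            M[i, j] = (c1 if b1 else 1 - c1) * (c2 if b2 else 1 - c2)
    # Stationary distribution: solve v M = v subject to sum(v) = 1.
    A = np.vstack([M.T - np.eye(4), np.ones(4)])
    b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
    v = np.linalg.lstsq(A, b, rcond=None)[0]
    # Per-state payoffs for each player over (CC, CD, DC, DD).
    pay1 = v @ np.array([R, S, T, P])
    pay2 = v @ np.array([R, T, S, P])
    return pay1, pay2
```

Averaging a morality or success metric against the stationary distribution in this way is the kind of "typical circumstances" computation the abstract's reactive means formalize; for example, a near-always-defect player (p = q ≈ 0) earns more than a near-always-cooperate opponent (p = q ≈ 1) under these standard payoffs.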

Suggested Citation

  • Molnar, Grant & Hammond, Caroline & Fu, Feng, 2023. "Reactive means in the iterated Prisoner’s dilemma," Applied Mathematics and Computation, Elsevier, vol. 458(C).
  • Handle: RePEc:eee:apmaco:v:458:y:2023:i:c:s0096300323003703
    DOI: 10.1016/j.amc.2023.128201

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0096300323003703
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.amc.2023.128201?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Christian Hilbe & Krishnendu Chatterjee & Martin A. Nowak, 2018. "Publisher Correction: Partners and rivals in direct reciprocity," Nature Human Behaviour, Nature, vol. 2(7), pages 523-523, July.
    2. Christian Hilbe & Krishnendu Chatterjee & Martin A. Nowak, 2018. "Partners and rivals in direct reciprocity," Nature Human Behaviour, Nature, vol. 2(7), pages 469-477, July.
    3. Takahiro Ezaki & Yutaka Horita & Masanori Takezawa & Naoki Masuda, 2016. "Reinforcement Learning Explains Conditional Cooperation and Its Moody Cousin," PLOS Computational Biology, Public Library of Science, vol. 12(7), pages 1-13, July.
    4. Wang, Shengxian & Chen, Xiaojie & Xiao, Zhilong & Szolnoki, Attila, 2022. "Decentralized incentives for general well-being in networked public goods game," Applied Mathematics and Computation, Elsevier, vol. 431(C).
    5. Christoph Hauert & Michael Doebeli, 2004. "Spatial structure often inhibits the evolution of cooperation in the snowdrift game," Nature, Nature, vol. 428(6983), pages 643-646, April.
    6. Ethan Akin, 2015. "What You Gotta Know to Play Good in the Iterated Prisoner’s Dilemma," Games, MDPI, vol. 6(3), pages 1-16, June.
    7. İzgi, Burhaneddin & Özkaya, Murat & Üre, Nazım Kemal & Perc, Matjaž, 2023. "Extended matrix norm method: Applications to bimatrix games and convergence results," Applied Mathematics and Computation, Elsevier, vol. 438(C).
    8. Jelena Grujić & Constanza Fosco & Lourdes Araujo & José A Cuesta & Angel Sánchez, 2010. "Social Experiments in the Mesoscale: Humans Playing a Spatial Prisoner's Dilemma," PLOS ONE, Public Library of Science, vol. 5(11), pages 1-9, November.
    9. Marc Harper & Vincent Knight & Martin Jones & Georgios Koutsovoulos & Nikoleta E Glynatsi & Owen Campbell, 2017. "Reinforcement learning produces dominant strategies for the Iterated Prisoner’s Dilemma," PLOS ONE, Public Library of Science, vol. 12(12), pages 1-33, December.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Yang, Zhengzhi & Zheng, Lei & Perc, Matjaž & Li, Yumeng, 2024. "Interaction state Q-learning promotes cooperation in the spatial prisoner's dilemma game," Applied Mathematics and Computation, Elsevier, vol. 463(C).
    2. Kang, Kai & Tian, Jinyan & Zhang, Boyu, 2024. "Cooperation and control in asymmetric repeated games," Applied Mathematics and Computation, Elsevier, vol. 470(C).
    3. Ma, Yin-Jie & Jiang, Zhi-Qiang & Podobnik, Boris, 2022. "Predictability of players’ actions as a mechanism to boost cooperation," Chaos, Solitons & Fractals, Elsevier, vol. 164(C).
    4. Ding, Zhen-Wei & Zheng, Guo-Zhong & Cai, Chao-Ran & Cai, Wei-Ran & Chen, Li & Zhang, Ji-Qiang & Wang, Xu-Ming, 2023. "Emergence of cooperation in two-agent repeated games with reinforcement learning," Chaos, Solitons & Fractals, Elsevier, vol. 175(P1).
    5. Peter S. Park & Martin A. Nowak & Christian Hilbe, 2022. "Cooperation in alternating interactions with memory constraints," Nature Communications, Nature, vol. 13(1), pages 1-11, December.
    6. Jia, Danyang & Li, Tong & Zhao, Yang & Zhang, Xiaoqin & Wang, Zhen, 2022. "Empty nodes affect conditional cooperation under reinforcement learning," Applied Mathematics and Computation, Elsevier, vol. 413(C).
    7. Masahiko Ueda & Toshiyuki Tanaka, 2020. "Linear algebraic structure of zero-determinant strategies in repeated games," PLOS ONE, Public Library of Science, vol. 15(4), pages 1-13, April.
    8. Takahiro Ezaki & Naoki Masuda, 2017. "Reinforcement learning account of network reciprocity," PLOS ONE, Public Library of Science, vol. 12(12), pages 1-8, December.
    9. Laura Schmid & Farbod Ekbatani & Christian Hilbe & Krishnendu Chatterjee, 2023. "Quantitative assessment can stabilize indirect reciprocity under imperfect information," Nature Communications, Nature, vol. 14(1), pages 1-14, December.
    10. Quan, Ji & Chen, Xinyue & Wang, Xianjia, 2024. "Repeated prisoner's dilemma games in multi-player structured populations with crosstalk," Applied Mathematics and Computation, Elsevier, vol. 473(C).
    11. Yongkui Liu & Xiaojie Chen & Lin Zhang & Long Wang & Matjaž Perc, 2012. "Win-Stay-Lose-Learn Promotes Cooperation in the Spatial Prisoner's Dilemma Game," PLOS ONE, Public Library of Science, vol. 7(2), pages 1-8, February.
    12. Pi, Bin & Li, Yuhan & Feng, Minyu, 2022. "An evolutionary game with conformists and profiteers regarding the memory mechanism," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 597(C).
    13. Huang, Chaochao & Wang, Chaoqian, 2024. "Memory-based involution dilemma on square lattices," Chaos, Solitons & Fractals, Elsevier, vol. 178(C).
    14. Maria Kleshnina & Christian Hilbe & Štěpán Šimsa & Krishnendu Chatterjee & Martin A. Nowak, 2023. "The effect of environmental information on evolution of cooperation in stochastic games," Nature Communications, Nature, vol. 14(1), pages 1-11, December.
    15. Fabio Della Rossa & Fabio Dercole & Anna Di Meglio, 2020. "Direct Reciprocity and Model-Predictive Strategy Update Explain the Network Reciprocity Observed in Socioeconomic Networks," Games, MDPI, vol. 11(1), pages 1-28, March.
    16. Xiaofeng Wang, 2021. "Costly Participation and The Evolution of Cooperation in the Repeated Public Goods Game," Dynamic Games and Applications, Springer, vol. 11(1), pages 161-183, March.
    17. Hahnel, Ulf J.J. & Fell, Michael J., 2022. "Pricing decisions in peer-to-peer and prosumer-centred electricity markets: Experimental analysis in Germany and the United Kingdom," Renewable and Sustainable Energy Reviews, Elsevier, vol. 162(C).
    18. Song, Sha & Pan, Qiuhui & Zhu, Wenqiang & He, Mingfeng, 2023. "Evolution of cooperation in games with dual attribute strategy," Chaos, Solitons & Fractals, Elsevier, vol. 175(P1).
    19. Jorge Marco & Renan Goetz, 2024. "Public policy design and common property resources: A social network approach," American Journal of Agricultural Economics, John Wiley & Sons, vol. 106(1), pages 252-285, January.
    20. Usui, Yuki & Ueda, Masahiko, 2021. "Symmetric equilibrium of multi-agent reinforcement learning in repeated prisoner’s dilemma," Applied Mathematics and Computation, Elsevier, vol. 409(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:apmaco:v:458:y:2023:i:c:s0096300323003703. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: https://www.journals.elsevier.com/applied-mathematics-and-computation .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.