
Symmetric equilibrium of multi-agent reinforcement learning in repeated prisoner’s dilemma

Authors

  • Usui, Yuki
  • Ueda, Masahiko

Abstract

We investigate the repeated prisoner’s dilemma game in which both players alternately use reinforcement learning to obtain their optimal memory-one strategies. We theoretically solve the simultaneous Bellman optimality equations of reinforcement learning. We find that, among all deterministic memory-one strategies, the Win-Stay Lose-Shift strategy, the Grim strategy, and the strategy that always defects can each form a symmetric equilibrium of the mutual reinforcement learning process.
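
The abstract’s claim can be illustrated with a small numerical check: treat each deterministic memory-one strategy as a map from the previous round’s outcome to an action, and test whether any memory-one deviation earns a higher discounted payoff against that strategy than the strategy earns against itself. The sketch below is not the authors’ analysis (the paper solves the Bellman optimality equations analytically); the payoff values (T, R, P, S) = (5, 4, 1, 0), the discount factor 0.9, the truncated horizon, and the assumption that both players cooperate in the first round are illustrative choices, not values taken from the paper.

```python
# Minimal sketch: best-response check for deterministic memory-one strategies
# in the discounted repeated prisoner's dilemma. Illustrative assumptions:
# payoffs (T, R, P, S) = (5, 4, 1, 0), discount factor 0.9, truncated horizon,
# and both players cooperating in the first round.
from itertools import product

C, D = 0, 1
# Row player's one-shot payoff, indexed by (my action, opponent's action).
PAYOFF = {(C, C): 4, (C, D): 0, (D, C): 5, (D, D): 1}
DELTA = 0.9      # discount factor (illustrative)
HORIZON = 500    # long enough that DELTA**HORIZON is negligible

# A deterministic memory-one strategy maps the previous outcome
# (my previous action, opponent's previous action) to this round's action.
STATES = [(C, C), (C, D), (D, C), (D, D)]

def discounted_value(strategy_a, strategy_b):
    """Discounted payoff of player A when A uses strategy_a and B uses strategy_b."""
    a, b = C, C                       # assumed first-round actions (illustrative)
    total, weight = float(PAYOFF[(a, b)]), 1.0
    for _ in range(HORIZON):
        # Player B observes the outcome as (its own action, A's action).
        a, b = strategy_a[(a, b)], strategy_b[(b, a)]
        weight *= DELTA
        total += weight * PAYOFF[(a, b)]
    return total

def is_symmetric_equilibrium(strategy):
    """True if no deterministic memory-one deviation earns more against `strategy`."""
    baseline = discounted_value(strategy, strategy)
    for actions in product([C, D], repeat=4):
        deviation = dict(zip(STATES, actions))
        if discounted_value(deviation, strategy) > baseline + 1e-9:
            return False
    return True

named = {
    "Win-Stay Lose-Shift": {(C, C): C, (C, D): D, (D, C): D, (D, D): C},
    "Grim":                {(C, C): C, (C, D): D, (D, C): D, (D, D): D},
    "All-D":               {(C, C): D, (C, D): D, (D, C): D, (D, D): D},
}
for name, strat in named.items():
    print(name, is_symmetric_equilibrium(strat))
```

Under these particular choices the check reports Win-Stay Lose-Shift, Grim, and All-D as best responses to themselves, consistent with the abstract; with other payoff values or discount factors the outcome of such a check can differ.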

Suggested Citation

  • Usui, Yuki & Ueda, Masahiko, 2021. "Symmetric equilibrium of multi-agent reinforcement learning in repeated prisoner’s dilemma," Applied Mathematics and Computation, Elsevier, vol. 409(C).
  • Handle: RePEc:eee:apmaco:v:409:y:2021:i:c:s0096300321004598
    DOI: 10.1016/j.amc.2021.126370

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0096300321004598
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.amc.2021.126370?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a location where you can access this item with your library subscription

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Christian Hilbe & Krishnendu Chatterjee & Martin A. Nowak, 2018. "Partners and rivals in direct reciprocity," Nature Human Behaviour, Nature, vol. 2(7), pages 469-477, July.
    2. Imhof, Lorens & Nowak, Martin & Fudenberg, Drew, 2007. "Tit-for-Tat or Win-Stay, Lose-Shift?," Scholarly Articles 3200671, Harvard University Department of Economics.
    3. Marc Harper & Vincent Knight & Martin Jones & Georgios Koutsovoulos & Nikoleta E Glynatsi & Owen Campbell, 2017. "Reinforcement learning produces dominant strategies for the Iterated Prisoner’s Dilemma," PLOS ONE, Public Library of Science, vol. 12(12), pages 1-33, December.

    Citations



    Cited by:

    1. Masahiko Ueda, 2022. "Controlling Conditional Expectations by Zero-Determinant Strategies," SN Operations Research Forum, Springer, vol. 3(3), pages 1-22, September.
    2. Ueda, Masahiko, 2023. "Memory-two strategies forming symmetric mutual reinforcement learning equilibrium in repeated prisoners’ dilemma game," Applied Mathematics and Computation, Elsevier, vol. 444(C).
    3. Wang, Xianjia & Yang, Zhipeng & Liu, Yanli & Chen, Guici, 2023. "A reinforcement learning-based strategy updating model for the cooperative evolution," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 618(C).
    4. Ding, Zhen-Wei & Zheng, Guo-Zhong & Cai, Chao-Ran & Cai, Wei-Ran & Chen, Li & Zhang, Ji-Qiang & Wang, Xu-Ming, 2023. "Emergence of cooperation in two-agent repeated games with reinforcement learning," Chaos, Solitons & Fractals, Elsevier, vol. 175(P1).
    5. Yuan, Hairui & Meng, Xinzhu, 2022. "Replicator dynamics of the Hawk-Dove game with different stochastic noises in infinite populations," Applied Mathematics and Computation, Elsevier, vol. 430(C).
    6. Wolfram Barfuss & Janusz Meylahn, 2022. "Intrinsic fluctuations of reinforcement learning promote cooperation," Papers 2209.01013, arXiv.org, revised Feb 2023.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Ding, Zhen-Wei & Zheng, Guo-Zhong & Cai, Chao-Ran & Cai, Wei-Ran & Chen, Li & Zhang, Ji-Qiang & Wang, Xu-Ming, 2023. "Emergence of cooperation in two-agent repeated games with reinforcement learning," Chaos, Solitons & Fractals, Elsevier, vol. 175(P1).
    2. Molnar, Grant & Hammond, Caroline & Fu, Feng, 2023. "Reactive means in the iterated Prisoner’s dilemma," Applied Mathematics and Computation, Elsevier, vol. 458(C).
    3. Yohsuke Murase & Seung Ki Baek, 2021. "Friendly-rivalry solution to the iterated n-person public-goods game," PLOS Computational Biology, Public Library of Science, vol. 17(1), pages 1-17, January.
    4. Evans, Alecia & Sesmero, Juan, 2022. "Cooperation in Social Dilemmas with Correlated Noisy Payoffs: Theory and Experimental Evidence," 2021 Annual Meeting, August 1-3, Austin, Texas 322804, Agricultural and Applied Economics Association.
    5. Masahiko Ueda & Toshiyuki Tanaka, 2020. "Linear algebraic structure of zero-determinant strategies in repeated games," PLOS ONE, Public Library of Science, vol. 15(4), pages 1-13, April.
    6. Liu, Fanglin & Wu, Bin, 2022. "Environmental quality and population welfare in Markovian eco-evolutionary dynamics," Applied Mathematics and Computation, Elsevier, vol. 431(C).
    7. Christopher Lee & Marc Harper & Dashiell Fryer, 2015. "The Art of War: Beyond Memory-one Strategies in Population Games," PLOS ONE, Public Library of Science, vol. 10(3), pages 1-16, March.
    8. Laura Schmid & Farbod Ekbatani & Christian Hilbe & Krishnendu Chatterjee, 2023. "Quantitative assessment can stabilize indirect reciprocity under imperfect information," Nature Communications, Nature, vol. 14(1), pages 1-14, December.
    9. Werner, Tobias, 2021. "Algorithmic and human collusion," DICE Discussion Papers 372, Heinrich Heine University Düsseldorf, Düsseldorf Institute for Competition Economics (DICE).
    10. Quan, Ji & Chen, Xinyue & Wang, Xianjia, 2024. "Repeated prisoner's dilemma games in multi-player structured populations with crosstalk," Applied Mathematics and Computation, Elsevier, vol. 473(C).
    11. Drew Fudenberg & David G. Rand & Anna Dreber, 2012. "Slow to Anger and Fast to Forgive: Cooperation in an Uncertain World," American Economic Review, American Economic Association, vol. 102(2), pages 720-749, April.
    12. Liu, Jinzhuo & Meng, Haoran & Wang, Wei & Xie, Zhongwen & Yu, Qian, 2019. "Evolution of cooperation on independent networks: The influence of asymmetric information sharing updating mechanism," Applied Mathematics and Computation, Elsevier, vol. 340(C), pages 234-241.
    13. Evans, Alecia & Sesmero, Juan Pablo, 2022. "Noisy Payoffs in an Infinitely Repeated Prisoner’s Dilemma – Experimental Evidence," 2022 Annual Meeting, July 31-August 2, Anaheim, California 322434, Agricultural and Applied Economics Association.
    14. Chang, Shuhua & Zhang, Zhipeng & Wu, Yu’e & Xie, Yunya, 2018. "Cooperation is enhanced by inhomogeneous inertia in spatial prisoner’s dilemma game," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 490(C), pages 419-425.
    15. Kurokawa, Shun, 2019. "How memory cost, switching cost, and payoff non-linearity affect the evolution of persistence," Applied Mathematics and Computation, Elsevier, vol. 341(C), pages 174-192.
    16. Yongkui Liu & Xiaojie Chen & Lin Zhang & Long Wang & Matjaž Perc, 2012. "Win-Stay-Lose-Learn Promotes Cooperation in the Spatial Prisoner's Dilemma Game," PLOS ONE, Public Library of Science, vol. 7(2), pages 1-8, February.
    17. Sean Duffy & J. J. Naddeo & David Owens & John Smith, 2024. "Cognitive Load and Mixed Strategies: On Brains and Minimax," International Game Theory Review (IGTR), World Scientific Publishing Co. Pte. Ltd., vol. 26(03), pages 1-34, September.
    18. Kang, Kai & Tian, Jinyan & Zhang, Boyu, 2024. "Cooperation and control in asymmetric repeated games," Applied Mathematics and Computation, Elsevier, vol. 470(C).
    19. Wang, Xu-Wen & Nie, Sen & Jiang, Luo-Luo & Wang, Bing-Hong & Chen, Shi-Ming, 2017. "Role of delay-based reward in the spatial cooperation," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 465(C), pages 153-158.
    20. Yves Breitmoser, 2015. "Cooperation, but No Reciprocity: Individual Strategies in the Repeated Prisoner's Dilemma," American Economic Review, American Economic Association, vol. 105(9), pages 2882-2910, September.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:apmaco:v:409:y:2021:i:c:s0096300321004598. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu. General contact details of provider: https://www.journals.elsevier.com/applied-mathematics-and-computation .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.