
Q-learning boosts the evolution of cooperation in structured population by involving extortion

Author

Listed:
  • Ding, Hong
  • Zhang, Geng-shun
  • Wang, Shi-hao
  • Li, Juan
  • Wang, Zhen

Abstract

Extortion strategies guarantee that one player's surplus exceeds the co-player's surplus by a fixed factor. Although extortion is unstable in well-mixed populations, recent studies have found that it can act as a catalyst for cooperation in the spatial prisoner's dilemma game, especially when strategy updating follows replicator-like dynamics or innovative mechanisms such as myopic best response or aspiration-driven dynamics. Q-learning is a typical reinforcement learning algorithm; importantly, on its own it cannot promote cooperation in the classic two-strategy prisoner's dilemma game. Here, we explore the effect of Q-learning on cooperation when extortion is involved as a third strategy. The results reveal that Q-learning significantly boosts the evolution of cooperation, and that this effect is robust to the population structure (regular lattice, small-world network and scale-free network) and to the extortion strength. The reason is that extortioners give cooperators a better opportunity to survive, while cooperators in turn act as catalysts for the coexistence of the three strategies. In particular, Q-learning promotes cooperation more strongly than replicator-like dynamics and myopic best response. When the temptation to defect is not too large, Q-learning also outperforms aspiration-driven dynamics; otherwise, aspiration-driven dynamics performs better. This study reveals the important role of reinforcement learning in the evolution of cooperation.
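
To make the learning-based updating rule concrete, the following minimal Python sketch couples Q-learning with a three-strategy (cooperation, defection, extortion) prisoner's dilemma on a periodic square lattice. The payoff matrix, the extortion factor chi, the temptation b, the lattice size and the learning parameters are illustrative assumptions rather than the paper's exact parametrization, and the learning state is simply taken to be an agent's current strategy.

import numpy as np

rng = np.random.default_rng(0)

L = 20                               # lattice side length (illustrative)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.02   # learning rate, discount factor, exploration rate
b, chi = 1.2, 1.5                    # temptation to defect and extortion factor (assumed values)

C, D, E = 0, 1, 2                    # cooperation, defection, extortion
# Row player's payoff against the column player. Values are illustrative; they only
# preserve the extortion property that E's surplus against C is chi times C's surplus
# against E (with the punishment payoff normalized to 0).
PAYOFF = np.array([
    [1.0,                0.0, 1.0 / (1.0 + chi)],   # C vs C, D, E
    [b,                  0.0, 0.0              ],   # D vs C, D, E
    [chi / (1.0 + chi),  0.0, 0.0              ],   # E vs C, D, E
])

strategy = rng.integers(0, 3, size=(L, L))   # random initial strategies
Q = np.zeros((L, L, 3, 3))                   # per-site Q-table: Q[x, y, state, action]

def neighbors(x, y):
    """Four nearest neighbours on a periodic lattice."""
    return [((x + 1) % L, y), ((x - 1) % L, y), (x, (y + 1) % L), (x, (y - 1) % L)]

def site_payoff(x, y):
    """Accumulated payoff of site (x, y) against its four neighbours."""
    s = strategy[x, y]
    return sum(PAYOFF[s, strategy[nx, ny]] for nx, ny in neighbors(x, y))

for step in range(200):                      # Monte Carlo steps (short demo run)
    for _ in range(L * L):
        x, y = rng.integers(0, L, size=2)
        s = strategy[x, y]                   # state: the site's current strategy
        if rng.random() < EPS:               # epsilon-greedy choice of the next strategy
            a = int(rng.integers(0, 3))
        else:
            a = int(np.argmax(Q[x, y, s]))
        strategy[x, y] = a
        r = site_payoff(x, y)                # reward: payoff earned with the new strategy
        # standard Q-learning update toward the bootstrapped target
        Q[x, y, s, a] += ALPHA * (r + GAMMA * Q[x, y, a].max() - Q[x, y, s, a])

fractions = [(strategy == k).mean() for k in (C, D, E)]
print("fractions  C: %.2f  D: %.2f  E: %.2f" % tuple(fractions))

The quantitative outcome of such a run depends entirely on the assumed payoffs and parameters; the sketch is meant only to show how a per-site Q-table replaces imitation-based strategy updating in this class of models.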

Suggested Citation

  • Ding, Hong & Zhang, Geng-shun & Wang, Shi-hao & Li, Juan & Wang, Zhen, 2019. "Q-learning boosts the evolution of cooperation in structured population by involving extortion," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 536(C).
  • Handle: RePEc:eee:phsmap:v:536:y:2019:i:c:s0378437119314591
    DOI: 10.1016/j.physa.2019.122551

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0378437119314591
    Download Restriction: Full text for ScienceDirect subscribers only. The journal offers the option of making the article available online on ScienceDirect for a fee of $3,000.

    File URL: https://libkey.io/10.1016/j.physa.2019.122551?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Peter Duersch & Jörg Oechssler & Burkhard Schipper, 2014. "When is tit-for-tat unbeatable?," International Journal of Game Theory, Springer;Game Theory Society, vol. 43(1), pages 25-36, February.
    2. repec:cla:levarc:786969000000001297 is not listed on IDEAS
    3. Fudenberg, Drew & Maskin, Eric, 1990. "Evolution and Cooperation in Noisy Repeated Games," American Economic Review, American Economic Association, vol. 80(2), pages 274-279, May.
    4. Xu, Bo & Lan, Yini, 2016. "The distribution of wealth and the effect of extortion in structured populations," Chaos, Solitons & Fractals, Elsevier, vol. 87(C), pages 276-280.
    5. Graham Kendall & Xin Yao & Siang Yew Chong, 2007. "The Iterated Prisoners' Dilemma: 20 Years On," World Scientific Books, World Scientific Publishing Co. Pte. Ltd., number 6461, August.
    6. Christian Hilbe & Kristin Hagel & Manfred Milinski, 2016. "Asymmetric Power Boosts Extortion in an Economic Experiment," PLOS ONE, Public Library of Science, vol. 11(10), pages 1-14, October.
    7. Wang, JunFang & Guo, JinLi, 2019. "A synergy of punishment and extortion in cooperation dilemmas driven by the leader," Chaos, Solitons & Fractals, Elsevier, vol. 119(C), pages 263-268.
    8. Zhijian Wang & Yanran Zhou & Jaimie W. Lien & Jie Zheng & Bin Xu, 2016. "Extortion Can Outperform Generosity in the Iterated Prisoners' Dilemma," Levine's Bibliography 786969000000001297, UCLA Department of Economics.
    9. Zhi-Hai Rong & Qian Zhao & Zhi-Xi Wu & Tao Zhou & Chi Kong Tse, 2016. "Proper aspiration level promotes generous behavior in the spatial prisoner’s dilemma game," The European Physical Journal B: Condensed Matter and Complex Systems, Springer;EDP Sciences, vol. 89(7), pages 1-7, July.
    10. Siang Yew Chong & Jan Humble & Graham Kendall & Jiawei Li & Xin Yao, 2007. "The Iterated Prisoner's Dilemma: 20 Years On," World Scientific Book Chapters, in: The Iterated Prisoners' Dilemma 20 Years On, chapter 1, pages 1-21, World Scientific Publishing Co. Pte. Ltd..
    11. Xu, Bo & Yue, Yunpeng, 2016. "The emergence of cooperation in tie strength models," Chaos, Solitons & Fractals, Elsevier, vol. 91(C), pages 585-590.
    12. Per Molander, 1985. "The Optimal Level of Generosity in a Selfish, Uncertain Environment," Journal of Conflict Resolution, Peace Science Society (International), vol. 29(4), pages 611-618, December.
    13. McAvoy, Alex & Hauert, Christoph, 2017. "Autocratic strategies for alternating games," Theoretical Population Biology, Elsevier, vol. 113(C), pages 13-22.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Mao, Yajun & Rong, Zhihai & Wu, Zhi-Xi, 2021. "Effect of collective influence on the evolution of cooperation in evolutionary prisoner’s dilemma games," Applied Mathematics and Computation, Elsevier, vol. 392(C).
    2. Gao, Liyan & Pan, Qiuhui & He, Mingfeng, 2021. "Environmental-based defensive promotes cooperation in the prisoner’s dilemma game," Applied Mathematics and Computation, Elsevier, vol. 401(C).
    3. Yang, Zhengzhi & Zheng, Lei & Perc, Matjaž & Li, Yumeng, 2024. "Interaction state Q-learning promotes cooperation in the spatial prisoner's dilemma game," Applied Mathematics and Computation, Elsevier, vol. 463(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Kang, Kai & Tian, Jinyan & Zhang, Boyu, 2024. "Cooperation and control in asymmetric repeated games," Applied Mathematics and Computation, Elsevier, vol. 470(C).
    2. Benjamin M Zagorsky & Johannes G Reiter & Krishnendu Chatterjee & Martin A Nowak, 2013. "Forgiver Triumphs in Alternating Prisoner's Dilemma," PLOS ONE, Public Library of Science, vol. 8(12), pages 1-8, December.
    3. Wang, Junfang & Shen, Aizhong, 2024. "The synergy of elimination and zero-determinant strategy on dynamic games," Chaos, Solitons & Fractals, Elsevier, vol. 182(C).
    4. Burkhard C. Schipper, 2022. "Strategic Teaching and Learning in Games," American Economic Journal: Microeconomics, American Economic Association, vol. 14(3), pages 321-352, August.
    5. Masahiko Ueda, 2022. "Controlling Conditional Expectations by Zero-Determinant Strategies," SN Operations Research Forum, Springer, vol. 3(3), pages 1-22, September.
    6. Taha, Mohammad A. & Ghoneim, Ayman, 2020. "Zero-determinant strategies in repeated asymmetric games," Applied Mathematics and Computation, Elsevier, vol. 369(C).
    7. John W. Straka & Brenda C. Straka, 2020. "Reframe policymaking dysfunction through bipartisan-inclusion leadership," Policy Sciences, Springer;Society of Policy Sciences, vol. 53(4), pages 779-802, December.
    8. Burkhard Schipper, 2015. "Strategic teaching and learning in games," Working Papers 151, University of California, Davis, Department of Economics.
    9. Shahin Esmaeili, 2021. "Prisoner Dilemma in maximization constrained: the rationality of cooperation," Papers 2102.03644, arXiv.org, revised Sep 2021.
    10. Yanlong Zhang & Wolfram Elsner, 2020. "Social leverage, a core mechanism of cooperation. Locality, assortment, and network evolution," Journal of Evolutionary Economics, Springer, vol. 30(3), pages 867-889, July.
    11. Masahiko Ueda & Toshiyuki Tanaka, 2020. "Linear algebraic structure of zero-determinant strategies in repeated games," PLOS ONE, Public Library of Science, vol. 15(4), pages 1-13, April.
    12. Bhaskar V., 1996. "On the neutral stability of mixed strategies in asymmetric contests," Mathematical Social Sciences, Elsevier, vol. 31(1), pages 56-57, February.
    13. Matsushima Hitoshi, 2020. "Behavioral Theory of Repeated Prisoner’s Dilemma: Generous Tit-For-Tat Strategy," The B.E. Journal of Theoretical Economics, De Gruyter, vol. 20(1), pages 1-11, January.
    14. Ding, Shasha & Sun, Hao & Sun, Panfei & Han, Weibin, 2022. "Dynamic outcome of coopetition duopoly with implicit collusion," Chaos, Solitons & Fractals, Elsevier, vol. 160(C).
    15. Bhaskar, V., 1993. "Neutral Stability in Assymetric Evolutionary Games," Papers 9358, Tilburg - Center for Economic Research.
    16. John T. Scholz & Cheng‐Lung Wang, 2009. "Learning to Cooperate: Learning Networks and the Problem of Altruism," American Journal of Political Science, John Wiley & Sons, vol. 53(3), pages 572-587, July.
    17. Shota Fujishima, 2015. "The emergence of cooperation through leadership," International Journal of Game Theory, Springer;Game Theory Society, vol. 44(1), pages 17-36, February.
    18. Cason, Timothy N. & Mui, Vai-Lam, 2019. "Individual versus group choices of repeated game strategies: A strategy method approach," Games and Economic Behavior, Elsevier, vol. 114(C), pages 128-145.
    19. Chen, Wei & Wang, Jianwei & Yu, Fengyuan & He, Jialu & Xu, Wenshu & Dai, Wenhui, 2024. "Successful initial positioning of non-cooperative individuals in cooperative populations effectively hinders cooperation prosperity," Applied Mathematics and Computation, Elsevier, vol. 462(C).
    20. Sobel, Joel, 2000. "Economists' Models of Learning," Journal of Economic Theory, Elsevier, vol. 94(2), pages 241-261, October.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:phsmap:v:536:y:2019:i:c:s0378437119314591. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.journals.elsevier.com/physica-a-statistical-mechanics-and-its-applications/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.