
Generalized reinforcement learning in perfect-information games

Authors

Listed:
  • Maxwell Pak (Southwestern University of Finance and Economics)
  • Bing Xu (Southwestern University of Finance and Economics)

Abstract

This paper studies reinforcement learning in which players base their action choices on valuations they hold for the actions. We identify two general conditions on valuation-updating rules that together guarantee that the probability of playing a subgame perfect Nash equilibrium (SPNE) converges to one in games where no player is indifferent between two outcomes unless every other player is also indifferent. The same conditions guarantee that the fraction of times an SPNE is played converges to one almost surely. We also show that for additively separable valuations, in which a valuation is the sum of an empirical term and an error term, the conditions guaranteeing convergence can be made more intuitive. In addition, we give four examples of valuations that satisfy our conditions. These examples represent different degrees of sophistication in learning behavior and include well-known examples of reinforcement learning.
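
To make the learning mechanism concrete, the following is a minimal Python sketch of valuation-based learning with additively separable valuations in the spirit of the abstract: the empirical term is a running average of realized payoffs and the error term is a uniform noise that shrinks over time. The two-stage game, the noise schedule, and the averaging rule are illustrative assumptions, not the paper's formal conditions or its four examples.

    # Illustrative sketch only: a tiny perfect-information game in which no
    # player is indifferent between two outcomes. Backward induction gives
    # player 2 the replies l after L and r after R, so the SPNE path is (L, l).
    import random

    # Payoffs (player 1, player 2) at each terminal history.
    PAYOFFS = {
        ("L", "l"): (3, 1), ("L", "r"): (0, 0),
        ("R", "l"): (1, 2), ("R", "r"): (2, 3),
    }

    # Empirical valuation terms and visit counts, one per decision node and action.
    v1 = {"L": 0.0, "R": 0.0}                        # player 1 at the root
    v2 = {"L": {"l": 0.0, "r": 0.0},                 # player 2 after L
          "R": {"l": 0.0, "r": 0.0}}                 # player 2 after R
    n1 = {"L": 0, "R": 0}
    n2 = {"L": {"l": 0, "r": 0}, "R": {"l": 0, "r": 0}}

    def choose(vals, t):
        # Play the action whose valuation (empirical term + error term) is highest.
        noise = 1.0 / (1 + t) ** 0.5                 # assumed noise schedule
        return max(vals, key=lambda a: vals[a] + random.uniform(-noise, noise))

    for t in range(20000):
        a1 = choose(v1, t)                           # player 1 moves first
        a2 = choose(v2[a1], t)                       # player 2 observes a1 and replies
        u1, u2 = PAYOFFS[(a1, a2)]
        # Update the empirical terms at the nodes actually reached
        # (running averages of realized payoffs).
        n1[a1] += 1
        v1[a1] += (u1 - v1[a1]) / n1[a1]
        n2[a1][a2] += 1
        v2[a1][a2] += (u2 - v2[a1][a2]) / n2[a1][a2]

    print("Player 1 valuations at the root:", v1)
    print("Player 2 valuations after L and after R:", v2)

Whether a rule like this actually concentrates play on the SPNE path is precisely what the paper's two conditions govern; for example, an error term that vanishes too quickly can cut off exploration and lock play onto a non-equilibrium path, so the sketch should be read as bookkeeping for valuation updating rather than as a convergence result.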

Suggested Citation

  • Maxwell Pak & Bing Xu, 2016. "Generalized reinforcement learning in perfect-information games," International Journal of Game Theory, Springer;Game Theory Society, vol. 45(4), pages 985-1011, November.
  • Handle: RePEc:spr:jogath:v:45:y:2016:i:4:d:10.1007_s00182-015-0499-1
    DOI: 10.1007/s00182-015-0499-1

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s00182-015-0499-1
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1007/s00182-015-0499-1?utm_source=ideas
    LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Ellison, Glenn, 1993. "Learning, Local Interaction, and Coordination," Econometrica, Econometric Society, vol. 61(5), pages 1047-1071, September.
    2. Marx, Leslie M. & Swinkels, Jeroen M., 2000. "Order Independence for Iterated Weak Dominance," Games and Economic Behavior, Elsevier, vol. 31(2), pages 324-329, May.
    3. Ed Hopkins, 2002. "Two Competing Models of How People Learn in Games," Econometrica, Econometric Society, vol. 70(6), pages 2141-2166, November.
    4. Borgers, Tilman & Sarin, Rajiv, 1997. "Learning Through Reinforcement and Replicator Dynamics," Journal of Economic Theory, Elsevier, vol. 77(1), pages 1-14, November.
    5. Laslier, Jean-Francois & Topol, Richard & Walliser, Bernard, 2001. "A Behavioral Learning Process in Games," Games and Economic Behavior, Elsevier, vol. 37(2), pages 340-366, November.
    6. Jean-François Laslier & Bernard Walliser, 2005. "A reinforcement learning process in extensive form games," International Journal of Game Theory, Springer;Game Theory Society, vol. 33(2), pages 219-227, June.
    7. Beggs, A.W., 2005. "On the convergence of reinforcement learning," Journal of Economic Theory, Elsevier, vol. 122(1), pages 1-36, May.
    8. Martin J. Osborne & Ariel Rubinstein, 1994. "A Course in Game Theory," MIT Press Books, The MIT Press, edition 1, volume 1, number 0262650401, April.
    9. Sarin, Rajiv & Vahid, Farshid, 1999. "Payoff Assessments without Probabilities: A Simple Dynamic Model of Choice," Games and Economic Behavior, Elsevier, vol. 28(2), pages 294-309, August.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. McKinney, C. Nicholas & Van Huyck, John B., 2021. "Does Playing Against An Error Prone Opponent Influence Learning in Nim?," Journal of Behavioral and Experimental Economics (formerly The Journal of Socio-Economics), Elsevier, vol. 95(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Funai, Naoki, 2022. "Reinforcement learning with foregone payoff information in normal form games," Journal of Economic Behavior & Organization, Elsevier, vol. 200(C), pages 638-660.
    2. Alanyali, Murat, 2010. "A note on adjusted replicator dynamics in iterated games," Journal of Mathematical Economics, Elsevier, vol. 46(1), pages 86-98, January.
    3. Hopkins, Ed & Posch, Martin, 2005. "Attainability of boundary points under reinforcement learning," Games and Economic Behavior, Elsevier, vol. 53(1), pages 110-125, October.
    4. Izquierdo, Luis R. & Izquierdo, Segismundo S. & Gotts, Nicholas M. & Polhill, J. Gary, 2007. "Transient and asymptotic dynamics of reinforcement learning in games," Games and Economic Behavior, Elsevier, vol. 61(2), pages 259-276, November.
    5. Naoki Funai, 2019. "Convergence results on stochastic adaptive learning," Economic Theory, Springer;Society for the Advancement of Economic Theory (SAET), vol. 68(4), pages 907-934, November.
    6. Oyarzun, Carlos & Sarin, Rajiv, 2013. "Learning and risk aversion," Journal of Economic Theory, Elsevier, vol. 148(1), pages 196-225.
    7. Jacques Durieu & Philippe Solal, 2012. "Models of Adaptive Learning in Game Theory," Chapters, in: Richard Arena & Agnès Festré & Nathalie Lazaric (ed.), Handbook of Knowledge and Economics, chapter 11, Edward Elgar Publishing.
    8. Jonathan Newton, 2018. "Evolutionary Game Theory: A Renaissance," Games, MDPI, vol. 9(2), pages 1-67, May.
    9. Mengel, Friederike, 2012. "Learning across games," Games and Economic Behavior, Elsevier, vol. 74(2), pages 601-619.
    10. Schuster, Stephan, 2012. "Applications in Agent-Based Computational Economics," MPRA Paper 47201, University Library of Munich, Germany.
    11. Schuster, Stephan, 2010. "Network Formation with Adaptive Agents," MPRA Paper 27388, University Library of Munich, Germany.
    12. Ianni, Antonella, 2014. "Learning strict Nash equilibria through reinforcement," Journal of Mathematical Economics, Elsevier, vol. 50(C), pages 148-155.
    13. Walter Gutjahr, 2006. "Interaction dynamics of two reinforcement learners," Central European Journal of Operations Research, Springer;Slovak Society for Operations Research;Hungarian Operational Research Society;Czech Society for Operations Research;Österr. Gesellschaft für Operations Research (ÖGOR);Slovenian Society Informatika - Section for Operational Research;Croatian Operational Research Society, vol. 14(1), pages 59-86, February.
    14. Beggs, A.W., 2005. "On the convergence of reinforcement learning," Journal of Economic Theory, Elsevier, vol. 122(1), pages 1-36, May.
    15. Bernergård, Axel & Mohlin, Erik, 2019. "Evolutionary selection against iteratively weakly dominated strategies," Games and Economic Behavior, Elsevier, vol. 117(C), pages 82-97.
    16. Mario Bravo & Mathieu Faure, 2013. "Reinforcement Learning with Restrictions on the Action Set," AMSE Working Papers 1335, Aix-Marseille School of Economics, France, revised 01 Jul 2013.
    17. Ioannou, Christos A. & Romero, Julian, 2014. "A generalized approach to belief learning in repeated games," Games and Economic Behavior, Elsevier, vol. 87(C), pages 178-203.
    18. Pangallo, Marco & Sanders, James B.T. & Galla, Tobias & Farmer, J. Doyne, 2022. "Towards a taxonomy of learning dynamics in 2 × 2 games," Games and Economic Behavior, Elsevier, vol. 132(C), pages 1-21.
    19. Naoki Funai, 2013. "An Adaptive Learning Model in Coordination Games," Games, MDPI, vol. 4(4), pages 1-22, November.
    20. Mario Bravo, 2016. "An Adjusted Payoff-Based Procedure for Normal Form Games," Mathematics of Operations Research, INFORMS, vol. 41(4), pages 1469-1483, November.

    More about this item

    Keywords

    Reinforcement learning; Extensive-form games

    JEL classification:

    • D83 - Microeconomics - - Information, Knowledge, and Uncertainty - - - Search; Learning; Information and Knowledge; Communication; Belief; Unawareness


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:jogath:v:45:y:2016:i:4:d:10.1007_s00182-015-0499-1. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link it to an item in RePEc, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.
