
Spike-based Decision Learning of Nash Equilibria in Two-Player Games

Authors

  • Johannes Friedrich
  • Walter Senn

Abstract

Humans and animals face decision tasks in an uncertain multi-agent environment where an agent's strategy may change over time due to the co-adaptation of the other agents' strategies. The neuronal substrate and the computational algorithms underlying such adaptive decision making, however, are largely unknown. We propose a population coding model of spiking neurons with a policy gradient procedure that successfully acquires optimal strategies for classical game-theoretical tasks. The suggested population reinforcement learning reproduces data from human behavioral experiments for the blackjack and the inspector game, and it performs optimally according to a pure (deterministic) and a mixed (stochastic) Nash equilibrium, respectively. In contrast, temporal-difference (TD) learning, covariance learning, and basic reinforcement learning fail to perform optimally for the stochastic strategy. Spike-based population reinforcement learning, shown to follow the stochastic reward gradient, is therefore a viable candidate to explain automated decision learning of a Nash equilibrium in two-player games.

Author Summary: Socio-economic interactions are captured in a game-theoretic framework by multiple agents acting on a pool of goods to maximize their own reward. Neuroeconomics tries to explain the agents' behavior in neuronal terms. Classical models in neuroeconomics use temporal-difference (TD) learning, an algorithm that incrementally updates the values of state-action pairs and selects actions according to a value-based policy. In contrast, policy gradient methods do not introduce values as intermediate steps but directly derive an action selection policy that maximizes the total expected reward. We consider a decision-making network consisting of a population of neurons which, upon presentation of a spatio-temporal spike pattern, encodes binary actions by the population output spike trains and a subsequent majority vote. The action selection policy is parametrized by the strengths of the synapses projecting onto the population neurons. A gradient learning rule is derived which modifies these synaptic strengths and which depends on four factors: the pre- and postsynaptic activities, the action, and the reward. We show that for classical game-theoretical tasks our decision-making network endowed with the four-factor learning rule leads to Nash-optimal action selections. It also mimics human decision learning in these same tasks.
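As a concrete illustration of the policy-gradient idea in the summary, the minimal sketch below lets two REINFORCE-style learners with Bernoulli-logistic policies co-adapt in matching pennies, a two-player zero-sum game whose unique Nash equilibrium is the mixed strategy (0.5, 0.5). This is an assumption-laden stand-in, not the paper's spiking model: the single logistic unit abstracts away the neuron population, the spike trains, and the majority vote, and all names and parameter values are hypothetical. It keeps only the defining property that the parameter update follows the stochastic reward gradient of the action selection policy.

    # Minimal policy-gradient sketch (illustrative stand-in, not the
    # paper's spike-based population model). Two Bernoulli-logistic
    # learners play matching pennies; the unique Nash equilibrium is
    # the mixed strategy p = (0.5, 0.5).
    import numpy as np

    rng = np.random.default_rng(0)

    # Row player's payoff; the game is zero-sum, so the column player
    # receives the negative of each matrix entry.
    PAYOFF = np.array([[ 1.0, -1.0],
                       [-1.0,  1.0]])

    theta = np.array([1.0, -1.0])  # log-odds of action 1, one per player
    avg_p = np.zeros(2)            # running average of action probabilities

    for t in range(200_000):
        eta = 0.5 / np.sqrt(t + 1.0)          # decaying step damps the cycling
        p = 1.0 / (1.0 + np.exp(-theta))      # P(action = 1) for each player
        a = (rng.random(2) < p).astype(int)   # sample both actions
        r_row = PAYOFF[a[0], a[1]]
        r = np.array([r_row, -r_row])         # zero-sum rewards
        # REINFORCE update: reward times the score function; for a
        # Bernoulli-logistic policy, d log pi(a) / d theta = a - p.
        theta += eta * r * (a - p)
        avg_p += (p - avg_p) / (t + 1)        # time-averaged strategy

    print("time-averaged P(action = 1):", avg_p)  # both entries near 0.5

Simultaneous gradient play in zero-sum games is known to cycle around a mixed equilibrium rather than converge to it pointwise, which is why the sketch reports the time-averaged strategy. The paper's point is precisely that its spike-based four-factor rule, unlike TD-learning, covariance learning, and basic reinforcement learning, performs optimally at such a stochastic equilibrium.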

Suggested Citation

  • Johannes Friedrich & Walter Senn, 2012. "Spike-based Decision Learning of Nash Equilibria in Two-Player Games," PLOS Computational Biology, Public Library of Science, vol. 8(9), pages 1-12, September.
  • Handle: RePEc:plo:pcbi00:1002691
    DOI: 10.1371/journal.pcbi.1002691

    Download full text from publisher

    File URL: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1002691
    Download Restriction: no

    File URL: https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1002691&type=printable
    Download Restriction: no



    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Hanan Shteingart & Yonatan Loewenstein, 2014. "Reinforcement Learning and Human Behavior," Discussion Paper Series dp656, The Federmann Center for the Study of Rationality, the Hebrew University, Jerusalem.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Galbiati, Marco & Soramäki, Kimmo, 2011. "An agent-based model of payment systems," Journal of Economic Dynamics and Control, Elsevier, vol. 35(6), pages 859-875, June.
    2. Schipper, Burkhard C., 2021. "Discovery and equilibrium in games with unawareness," Journal of Economic Theory, Elsevier, vol. 198(C).
    3. Mathieu Faure & Gregory Roth, 2010. "Stochastic Approximations of Set-Valued Dynamical Systems: Convergence with Positive Probability to an Attractor," Mathematics of Operations Research, INFORMS, vol. 35(3), pages 624-640, August.
    4. Ianni, A., 2002. "Reinforcement learning and the power law of practice: some analytical results," Discussion Paper Series In Economics And Econometrics 203, Economics Division, School of Social Sciences, University of Southampton.
    5. ,, 2011. "Manipulative auction design," Theoretical Economics, Econometric Society, vol. 6(2), May.
    6. Christian Ewerhart, 2020. "Ordinal potentials in smooth games," Economic Theory, Springer;Society for the Advancement of Economic Theory (SAET), vol. 70(4), pages 1069-1100, November.
    7. Benaïm, Michel & Hofbauer, Josef & Hopkins, Ed, 2009. "Learning in games with unstable equilibria," Journal of Economic Theory, Elsevier, vol. 144(4), pages 1694-1709, July.
    8. Saori Iwanaga & Akira Namatame, 2015. "Hub Agents Determine Collective Behavior," New Mathematics and Natural Computation (NMNC), World Scientific Publishing Co. Pte. Ltd., vol. 11(02), pages 165-181.
    9. Erhao Xie, 2019. "Monetary Payoff and Utility Function in Adaptive Learning Models," Staff Working Papers 19-50, Bank of Canada.
    10. Jacob W. Crandall & Mayada Oudah & Tennom & Fatimah Ishowo-Oloko & Sherief Abdallah & Jean-François Bonnefon & Manuel Cebrian & Azim Shariff & Michael A. Goodrich & Iyad Rahwan, 2018. "Cooperating with machines," Nature Communications, Nature, vol. 9(1), pages 1-12, December.
      • Abdallah, Sherief & Bonnefon, Jean-François & Cebrian, Manuel & Crandall, Jacob W. & Ishowo-Oloko, Fatimah & Oudah, Mayada & Rahwan, Iyad & Shariff, Azim & Tennom,, 2017. "Cooperating with Machines," TSE Working Papers 17-806, Toulouse School of Economics (TSE).
      • Abdallah, Sherief & Bonnefon, Jean-François & Cebrian, Manuel & Crandall, Jacob W. & Ishowo-Oloko, Fatimah & Oudah, Mayada & Rahwan, Iyad & Shariff, Azim & Tennom,, 2017. "Cooperating with Machines," IAST Working Papers 17-68, Institute for Advanced Study in Toulouse (IAST).
      • Jacob Crandall & Mayada Oudah & Tennom & Fatimah Ishowo-Oloko & Sherief Abdallah & Jean-François Bonnefon & Manuel Cebrian & Azim Shariff & Michael Goodrich & Iyad Rahwan, 2018. "Cooperating with machines," Post-Print hal-01897802, HAL.
    11. Dieter Balkenborg & Rosemarie Nagel, 2016. "An Experiment on Forward vs. Backward Induction: How Fairness and Level k Reasoning Matter," German Economic Review, Verein für Socialpolitik, vol. 17(3), pages 378-408, August.
    12. B Kelsey Jack, 2009. "Auctioning Conservation Contracts in Indonesia - Participant Learning in Multiple Trial Rounds," CID Working Papers 35, Center for International Development at Harvard University.
    13. Waters, George A., 2009. "Chaos in the cobweb model with a new learning dynamic," Journal of Economic Dynamics and Control, Elsevier, vol. 33(6), pages 1201-1216, June.
    14. William L. Cooper & Tito Homem-de-Mello & Anton J. Kleywegt, 2015. "Learning and Pricing with Models That Do Not Explicitly Incorporate Competition," Operations Research, INFORMS, vol. 63(1), pages 86-103, February.
    15. Siegfried Berninghaus & Werner Güth & M. Vittoria Levati & Jianying Qiu, 2006. "Satisficing in sales competition: experimental evidence," Papers on Strategic Interaction 2006-32, Max Planck Institute of Economics, Strategic Interaction Group.
    16. Carlos Alós-Ferrer & Georg Kirchsteiger & Markus Walzl, 2010. "On the Evolution of Market Institutions: The Platform Design Paradox," Economic Journal, Royal Economic Society, vol. 120(543), pages 215-243, March.
    17. Cho, In-Koo, 2005. "Introduction to learning and bounded rationality," Journal of Economic Theory, Elsevier, vol. 124(2), pages 127-128, October.
    18. Ball, Richard, 2017. "Violations of monotonicity in evolutionary models with sample-based beliefs," Economics Letters, Elsevier, vol. 152(C), pages 100-104.
    19. Arcaute, E. & Dyagilev, K. & Johari, R. & Mannor, S., 2013. "Dynamics in tree formation games," Games and Economic Behavior, Elsevier, vol. 79(C), pages 1-29.
    20. Tsakas, Elias & Voorneveld, Mark, 2009. "The target projection dynamic," Games and Economic Behavior, Elsevier, vol. 67(2), pages 708-719, November.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pcbi00:1002691. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: ploscompbiol. General contact details of provider: https://journals.plos.org/ploscompbiol/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.