
A Gradient-based reinforcement learning model of market equilibration

Author

Listed:
  • He, Zhongzhi (Lawrence)

Abstract

This paper formulates a game-theoretic reinforcement learning model based on the stochastic gradient method whereby players start from their initial circumstances with dispersed information, use the expected gradient to update choice propensities, and converge to the predicted equilibrium of belief-based models. Gradient-based reinforcement learning (G-RL) entails a model-free simulation method to estimate the gradient of expected payoff with respect to choice propensities in repeated games. As the gradient points in the steepest direction towards the steady-state equilibrium, G-RL provides a theoretical justification for a probability-weighted, time-varying updating rule that optimally balances the trade-off between reinforcing past successful strategies (‘exploitation’) and exploring other strategies (‘exploration’) in choosing actions. The effectiveness and stability of G-RL are demonstrated in a simulated call market, where both the actual effect and the foregone effect are updated simultaneously during market equilibration. In contrast, the failure of payoff-based reinforcement learning (P-RL) is due to its constant-sensitivity updating rule, which causes an imbalance between exploitation and exploration in complex environments.
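The paper's exact updating rules are not reproduced on this page, but the contrast the abstract draws between gradient-based (G-RL) and payoff-based (P-RL) propensity updating can be illustrated with a minimal sketch. The Python code below is an assumption-laden illustration, not the author's model: it assumes a softmax (logit) mapping from propensities to choice probabilities, uses a REINFORCE-style score-function estimate of the payoff gradient as a stand-in for the model-free gradient estimator, a decaying step size in place of the probability-weighted time-varying rule, and a toy three-action game instead of the simulated call market. The gradient update raises the chosen action's propensity (the actual effect) while lowering the others in proportion to their choice probabilities (the foregone effect); the Roth-Erev style P-RL update reinforces only the chosen action with constant sensitivity.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(q):
    # Logit choice rule: map propensities to choice probabilities.
    z = q - q.max()
    p = np.exp(z)
    return p / p.sum()

def prl_update(q, action, payoff):
    # Payoff-based RL (Roth-Erev style): reinforce only the chosen action,
    # with constant sensitivity to the realized payoff.
    q = q.copy()
    q[action] += payoff
    return q

def grl_update(q, action, payoff, step):
    # Gradient-style update (REINFORCE score-function sketch):
    # d E[payoff] / d q_a = p_a * (u_a - E[u]) under the softmax rule,
    # estimated here from a single realized payoff.
    p = softmax(q)
    grad = -payoff * p        # foregone effect: all propensities drift down
    grad[action] += payoff    # actual effect: chosen action reinforced
    return q + step * grad

# Toy repeated decision with three actions and noisy payoffs; this is only
# a loose stand-in for the paper's simulated call market.
mean_payoff = np.array([0.2, 0.5, 0.8])

def play(rule, n_rounds=5000):
    q = np.zeros(3)
    for t in range(1, n_rounds + 1):
        p = softmax(q)
        a = rng.choice(3, p=p)
        u = mean_payoff[a] + 0.1 * rng.standard_normal()
        if rule == "P-RL":
            q = prl_update(q, a, u)
        else:
            q = grl_update(q, a, u, step=1.0 / np.sqrt(t))
    return softmax(q)

print("P-RL final choice probabilities:", np.round(play("P-RL"), 3))
print("G-RL final choice probabilities:", np.round(play("G-RL"), 3))
```

In this sketch the constant-sensitivity P-RL rule tends to lock onto whichever action happens to be reinforced early, while the gradient-style rule with a decaying step keeps exploring before concentrating probability on the highest-payoff action; this mirrors, only loosely, the exploitation-exploration imbalance the abstract attributes to P-RL.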

Suggested Citation

  • He, Zhongzhi (Lawrence), 2023. "A Gradient-based reinforcement learning model of market equilibration," Journal of Economic Dynamics and Control, Elsevier, vol. 152(C).
  • Handle: RePEc:eee:dyncon:v:152:y:2023:i:c:s0165188923000763
    DOI: 10.1016/j.jedc.2023.104670

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0165188923000763
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.jedc.2023.104670?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Fudenberg Drew & Kreps David M., 1993. "Learning Mixed Equilibria," Games and Economic Behavior, Elsevier, vol. 5(3), pages 320-367, July.
    2. Sebastien Pouget, 2007. "Adaptive Traders and the Design of Financial Markets," Journal of Finance, American Finance Association, vol. 62(6), pages 2835-2863, December.
    3. Ed Hopkins, 2002. "Two Competing Models of How People Learn in Games," Econometrica, Econometric Society, vol. 70(6), pages 2141-2166, November.
    4. Roth, Alvin E. & Erev, Ido, 1995. "Learning in extensive-form games: Experimental data and simple dynamic models in the intermediate term," Games and Economic Behavior, Elsevier, vol. 8(1), pages 164-212.
    5. Borgers, Tilman & Sarin, Rajiv, 1997. "Learning Through Reinforcement and Replicator Dynamics," Journal of Economic Theory, Elsevier, vol. 77(1), pages 1-14, November.
    6. Pouget, Sebastien, 2007. "Financial market design and bounded rationality: An experiment," Journal of Financial Markets, Elsevier, vol. 10(3), pages 287-317, August.
    7. Foster, Dean P. & Young, H. Peyton, 2006. "Regret testing: learning to play Nash equilibrium without knowing you have an opponent," Theoretical Economics, Econometric Society, vol. 1(3), pages 341-367, September.
    8. Colin Camerer & Teck-Hua Ho, 1999. "Experience-weighted Attraction Learning in Normal Form Games," Econometrica, Econometric Society, vol. 67(4), pages 827-874, July.
    9. Nax, Heinrich H. & Burton-Chellew, Maxwell N. & West, Stuart A. & Young, H. Peyton, 2016. "Learning in a black box," LSE Research Online Documents on Economics 68714, London School of Economics and Political Science, LSE Library.
    10. Nax, Heinrich H. & Burton-Chellew, Maxwell N. & West, Stuart A. & Young, H. Peyton, 2016. "Learning in a black box," Journal of Economic Behavior & Organization, Elsevier, vol. 127(C), pages 1-15.
    11. Jacob K. Goeree & Charles A. Holt & Thomas R. Palfrey, 2016. "Quantal Response Equilibrium: A Stochastic Theory of Games," Economics Books, Princeton University Press, edition 1, number 10743.
    12. Drew Fudenberg & David K. Levine, 1998. "The Theory of Learning in Games," MIT Press Books, The MIT Press, edition 1, volume 1, number 0262061945, April.
    13. Cho, In-Koo & Matsui, Akihiko, 2005. "Learning aspiration in repeated games," Journal of Economic Theory, Elsevier, vol. 124(2), pages 171-201, October.
    14. Jonathan Bendor & Dilip Mookherjee & Debraj Ray, 2001. "Aspiration-Based Reinforcement Learning In Repeated Interaction Games: An Overview," International Game Theory Review (IGTR), World Scientific Publishing Co. Pte. Ltd., vol. 3(02n03), pages 159-174.
    15. Arieli, Itai & Babichenko, Yakov, 2012. "Average testing and Pareto efficiency," Journal of Economic Theory, Elsevier, vol. 147(6), pages 2376-2398.
    16. Vernon L. Smith, 2003. "Constructivist and Ecological Rationality in Economics," American Economic Review, American Economic Association, vol. 93(3), pages 465-508, June.
    17. Young, H. Peyton, 2009. "Learning by trial and error," Games and Economic Behavior, Elsevier, vol. 65(2), pages 626-643, March.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Masiliūnas, Aidas, 2023. "Learning in rent-seeking contests with payoff risk and foregone payoff information," Games and Economic Behavior, Elsevier, vol. 140(C), pages 50-72.
    2. Jonathan Newton, 2018. "Evolutionary Game Theory: A Renaissance," Games, MDPI, vol. 9(2), pages 1-67, May.
    3. Mohlin, Erik & Östling, Robert & Wang, Joseph Tao-yi, 2020. "Learning by similarity-weighted imitation in winner-takes-all games," Games and Economic Behavior, Elsevier, vol. 120(C), pages 225-245.
    4. Ianni, A., 2002. "Reinforcement learning and the power law of practice: some analytical results," Discussion Paper Series In Economics And Econometrics 203, Economics Division, School of Social Sciences, University of Southampton.
    5. Funai, Naoki, 2022. "Reinforcement learning with foregone payoff information in normal form games," Journal of Economic Behavior & Organization, Elsevier, vol. 200(C), pages 638-660.
    6. Ed Hopkins, 2002. "Two Competing Models of How People Learn in Games," Econometrica, Econometric Society, vol. 70(6), pages 2141-2166, November.
    7. Blume, A. & DeJong, D.V. & Neumann, G. & Savin, N.E., 2000. "Learning and Communication in Sender-Receiver Games: An Economic Investigation," Other publications TiSEM 138dc36b-5269-421a-9e79-b, Tilburg University, School of Economics and Management.
    8. Battalio,R. & Samuelson,L. & Huyck,J. van, 1998. "Risk dominance, payoff dominance and probabilistic choice learning," Working papers 2, Wisconsin Madison - Social Systems.
    9. Mauersberger, Felix, 2019. "Thompson Sampling: Endogenously Random Behavior in Games and Markets," VfS Annual Conference 2019 (Leipzig): 30 Years after the Fall of the Berlin Wall - Democracy and Market Economy 203600, Verein für Socialpolitik / German Economic Association.
    10. Jakub Bielawski & Thiparat Chotibut & Fryderyk Falniowski & Michal Misiurewicz & Georgios Piliouras, 2022. "Unpredictable dynamics in congestion games: memory loss can prevent chaos," Papers 2201.10992, arXiv.org, revised Jan 2022.
    11. Pangallo, Marco & Sanders, James B.T. & Galla, Tobias & Farmer, J. Doyne, 2022. "Towards a taxonomy of learning dynamics in 2 × 2 games," Games and Economic Behavior, Elsevier, vol. 132(C), pages 1-21.
    12. Duffy, John, 2006. "Agent-Based Models and Human Subject Experiments," Handbook of Computational Economics, in: Leigh Tesfatsion & Kenneth L. Judd (ed.), Handbook of Computational Economics, edition 1, volume 2, chapter 19, pages 949-1011, Elsevier.
    13. Dridi, Slimane & Lehmann, Laurent, 2014. "On learning dynamics underlying the evolution of learning rules," Theoretical Population Biology, Elsevier, vol. 91(C), pages 20-36.
    14. Ianni, Antonella, 2014. "Learning strict Nash equilibria through reinforcement," Journal of Mathematical Economics, Elsevier, vol. 50(C), pages 148-155.
    15. Erik Mohlin & Robert Ostling & Joseph Tao-yi Wang, 2014. "Learning by Imitation in Games: Theory, Field, and Laboratory," Economics Series Working Papers 734, University of Oxford, Department of Economics.
    16. Benaïm, Michel & Hofbauer, Josef & Hopkins, Ed, 2009. "Learning in games with unstable equilibria," Journal of Economic Theory, Elsevier, vol. 144(4), pages 1694-1709, July.
    17. DeJong, D.V. & Blume, A. & Neumann, G., 1998. "Learning in Sender-Receiver Games," Other publications TiSEM 4a8b4f46-f30b-4ad2-bb0c-1, Tilburg University, School of Economics and Management.
    18. Jean-François Laslier & Bernard Walliser, 2015. "Stubborn learning," Theory and Decision, Springer, vol. 79(1), pages 51-93, July.
    19. Cason, Timothy N. & Friedman, Daniel & Hopkins, Ed, 2010. "Testing the TASP: An experimental investigation of learning in games with unstable equilibria," Journal of Economic Theory, Elsevier, vol. 145(6), pages 2309-2331, November.
    20. Walter Gutjahr, 2006. "Interaction dynamics of two reinforcement learners," Central European Journal of Operations Research, Springer;Slovak Society for Operations Research;Hungarian Operational Research Society;Czech Society for Operations Research;Österr. Gesellschaft für Operations Research (ÖGOR);Slovenian Society Informatika - Section for Operational Research;Croatian Operational Research Society, vol. 14(1), pages 59-86, February.

    More about this item

    Keywords

    Reinforcement learning; Machine learning; Stochastic gradient method; Model-free simulation; Call market; Market equilibration; Exploitation and exploration

    JEL classification:

    • C73 - Mathematical and Quantitative Methods - - Game Theory and Bargaining Theory - - - Stochastic and Dynamic Games; Evolutionary Games
    • D81 - Microeconomics - - Information, Knowledge, and Uncertainty - - - Criteria for Decision-Making under Risk and Uncertainty

    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:dyncon:v:152:y:2023:i:c:s0165188923000763. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/locate/jedc.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.