
Learning to signal: Analysis of a micro-level reinforcement model

Authors

  • Argiento, Raffaele
  • Pemantle, Robin
  • Skyrms, Brian
  • Volkov, Stanislav

Abstract

We consider the following signaling game. Nature plays first from the set {1,2}. Player 1 (the Sender) sees this and plays from the set {A,B}. Player 2 (the Receiver) sees only Player 1's play and plays from the set {1,2}. Both players win if Player 2's play equals Nature's play and lose otherwise. Players are told whether they have won or lost, and the game is repeated. An urn scheme for learning coordination in this game is as follows. Each node of the decision tree for Players 1 and 2 contains an urn with balls of two colors for the two possible decisions. Players make decisions by drawing from the appropriate urns. After a win, each ball that was drawn is reinforced by adding another of the same color to the urn. A number of equilibria are possible for this game other than the optimal ones. However, we show that the urn scheme achieves asymptotically optimal coordination.
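
The urn dynamics described in the abstract are straightforward to simulate. The following is a minimal Python sketch, not taken from the paper: the function and variable names, the starting weight of one ball per color, and the assumption that Nature picks each state with equal probability are illustrative choices.

    import random

    def simulate(rounds=100_000, seed=0):
        """Simulate the urn reinforcement scheme for the 2x2 signaling game."""
        rng = random.Random(seed)
        # Sender: one urn per state of Nature; balls are the signals "A" and "B".
        sender = {1: {"A": 1.0, "B": 1.0}, 2: {"A": 1.0, "B": 1.0}}
        # Receiver: one urn per signal; balls are the acts 1 and 2.
        receiver = {"A": {1: 1.0, 2: 1.0}, "B": {1: 1.0, 2: 1.0}}

        def draw(urn):
            # Draw a ball with probability proportional to its current weight.
            r = rng.uniform(0, sum(urn.values()))
            for ball, weight in urn.items():
                r -= weight
                if r <= 0:
                    return ball
            return ball  # guard against floating-point round-off

        wins = 0
        for _ in range(rounds):
            state = rng.choice((1, 2))    # Nature plays first (assumed uniform)
            signal = draw(sender[state])  # Sender sees the state, draws a signal
            act = draw(receiver[signal])  # Receiver sees only the signal
            if act == state:              # a win: reinforce both drawn balls
                sender[state][signal] += 1.0
                receiver[signal][act] += 1.0
                wins += 1
        return wins / rounds

    if __name__ == "__main__":
        print(f"empirical success rate: {simulate():.3f}")

Consistent with the asymptotically optimal coordination established in the paper, the success rate printed by a run like this should approach 1 as the number of rounds grows, even though the one-shot game also has suboptimal (pooling) equilibria.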

Suggested Citation

  • Argiento, Raffaele & Pemantle, Robin & Skyrms, Brian & Volkov, Stanislav, 2009. "Learning to signal: Analysis of a micro-level reinforcement model," Stochastic Processes and their Applications, Elsevier, vol. 119(2), pages 373-390, February.
  • Handle: RePEc:eee:spapps:v:119:y:2009:i:2:p:373-390

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0304-4149(08)00042-2
    Download Restriction: Full text for ScienceDirect subscribers only

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Bonacich, Phillip & Liggett, Thomas M., 2003. "Asymptotics of a matrix valued Markov chain arising in sociology," Stochastic Processes and their Applications, Elsevier, vol. 104(1), pages 155-171, March.

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.

    Cited by:

    1. Penélope Hernández & Bernhard von Stengel, 2014. "Nash Codes for Noisy Channels," Operations Research, INFORMS, vol. 62(6), pages 1221-1235, December.
    2. Zachary Fulker & Patrick Forber & Rory Smead & Christoph Riedl, 2022. "Spontaneous emergence of groups and signaling diversity in dynamic networks," Papers 2210.17309, arXiv.org, revised Jan 2024.
    3. Jason McKenzie Alexander & Brian Skyrms & Sandy Zabell, 2012. "Inventing New Signals," Dynamic Games and Applications, Springer, vol. 2(1), pages 129-145, March.
    4. Conor Mayo-Wilson & Kevin Zollman & David Danks, 2013. "Wisdom of crowds versus groupthink: learning in groups and in isolation," International Journal of Game Theory, Springer;Game Theory Society, vol. 42(3), pages 695-723, August.
    5. Jonathan Newton, 2018. "Evolutionary Game Theory: A Renaissance," Games, MDPI, vol. 9(2), pages 1-67, May.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Liggett, Thomas M. & Rolles, Silke W. W., 2004. "An infinite stochastic model of social network formation," Stochastic Processes and their Applications, Elsevier, vol. 113(1), pages 65-80, September.
    2. Matthias Greiff, 2013. "Rewards and the private provision of public goods on dynamic networks," Journal of Evolutionary Economics, Springer, vol. 23(5), pages 1001-1021, November.
    3. Irene Crimaldi & Pierre-Yves Louis & Ida Minelli, 2020. "Interacting non-linear reinforced stochastic processes: Synchronization and no-synchronization," Working Papers hal-02910341, HAL.
    4. Pemantle, Robin & Skyrms, Brian, 2004. "Network formation by reinforcement learning: the long and medium run," Mathematical Social Sciences, Elsevier, vol. 48(3), pages 315-327, November.
    5. Brian Skyrms & Robin Pemantle, 2004. "Learning to Network," Levine's Bibliography 122247000000000436, UCLA Department of Economics.
    6. Georgios Chasparis & Jeff Shamma & Anders Rantzer, 2015. "Nonconvergence to saddle boundary points under perturbed reinforcement learning," International Journal of Game Theory, Springer;Game Theory Society, vol. 44(3), pages 667-699, August.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:spapps:v:119:y:2009:i:2:p:373-390. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: http://www.elsevier.com/wps/find/journaldescription.cws_home/505572/description#description .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.