
Is It Better to Forget? Stimulus-Response, Prediction, and the Weight of Past Experience in a Fast-Paced Bargaining Task

Author

Listed:
  • Faison P. Gibson

    (University of Michigan Business School)

Abstract

Decision makers in dynamic environments such as air traffic control, firefighting, and call center operations adapt in real-time using outcome feedback. Understanding this adaptation is important for influencing and improving the decisions made. Recently, stimulus-response (S-R) learning models have been proposed as explanations for decision makers' adaptation. S-R models hypothesize that decision makers choose an action option based on their anticipation of its success. Decision makers learn by accumulating evidence over action options and combining that evidence with prior expectations. This study examines a standard S-R model and a simple variation of this model, in which past experience may receive an extremely low weight, as explanations for decision makers' adaptation in an evolving Internet-based bargaining environment. In Experiment 1, decision makers are taught to predict behavior in a bargaining task that follows rules that may be the opposite of, congruent to, or unrelated to a second task in which they must choose the deal terms they will offer. Both models provide a good account of the prediction task. However, only the second model, in which decision makers heavily discount all but the most recent past experience, provides a good account of subsequent behavior in the second task. To test whether Experiment 1 artificially related choice behavior and prediction, a second experiment examines both models' predictions concerning the effects of bargaining experience on subsequent prediction. In this study, decision models where long-term experience plays a dominating role do not appear to provide adequate explanations of decision makers' adaptation to their opponent's changing response behavior.
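
The abstract does not state the models' equations. As a rough, illustrative sketch only, the standard S-R account can be read as a cumulative-propensity (reinforcement) rule in the spirit of Erev and Roth (1998), and the variant as the same rule with near-total discounting of older evidence. The option names, the recency_weight parameter, and the payoffs below are assumptions for illustration, not the paper's specification.

```python
import random

def choose(propensities):
    """Select an option with probability proportional to its accumulated propensity."""
    total = sum(propensities.values())
    r = random.uniform(0.0, total)
    cumulative = 0.0
    for option, q in propensities.items():
        cumulative += q
        if r <= cumulative:
            return option
    return option  # floating-point fallback: return the last option

def update(propensities, chosen, payoff, recency_weight):
    """Outcome-feedback update.
    recency_weight near 0 -> cumulative S-R model (long-term experience dominates);
    recency_weight near 1 -> variant in which all but the most recent experience
    is heavily discounted.
    """
    for option in propensities:
        propensities[option] *= (1.0 - recency_weight)  # decay old evidence
    propensities[chosen] += payoff                      # reinforce the chosen action

# Illustrative run (option names, prior values, and payoffs are hypothetical):
propensities = {"aggressive_terms": 1.0, "conciliatory_terms": 1.0}  # prior expectations
for payoff in [0.2, 0.9, 0.1]:  # outcome feedback from three bargaining rounds
    action = choose(propensities)
    update(propensities, action, payoff, recency_weight=0.9)
print(propensities)
```

With recency_weight set near 1, the propensities are driven almost entirely by the latest round's feedback, which is the behavior the second model attributes to decision makers in the choice task.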

Suggested Citation

  • Faison P. Gibson, 2002. "Is It Better to Forget? Stimulus-Response, Prediction, and the Weight of Past Experience in a Fast-Paced Bargaining Task," Computational and Mathematical Organization Theory, Springer, vol. 8(1), pages 31-47, May.
  • Handle: RePEc:spr:comaot:v:8:y:2002:i:1:d:10.1023_a:1015128203878
    DOI: 10.1023/A:1015128203878

    Download full text from publisher

    File URL: http://link.springer.com/10.1023/A:1015128203878
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1023/A:1015128203878?utm_source=ideas
    LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Fudenberg, Drew & Levine, David, 1998. "Learning in games," European Economic Review, Elsevier, vol. 42(3-5), pages 631-639, May.
    2. Erev, Ido & Roth, Alvin E, 1998. "Predicting How People Play Games: Reinforcement Learning in Experimental Games with Unique, Mixed Strategy Equilibria," American Economic Review, American Economic Association, vol. 88(4), pages 848-881, September.
    3. Gibson, Faison P., 2000. "Feedback Delays: How Can Decision Makers Learn Not to Buy a New Car Every Time the Garage Is Empty?," Organizational Behavior and Human Decision Processes, Elsevier, vol. 83(1), pages 141-166, September.
    4. Roth, Alvin E. & Erev, Ido, 1995. "Learning in extensive-form games: Experimental data and simple dynamic models in the intermediate term," Games and Economic Behavior, Elsevier, vol. 8(1), pages 164-212.
    5. Rapoport, Amnon & Erev, Ido & Abraham, Elizabeth V. & Olson, David E., 1997. "Randomization and Adaptive Learning in a Simplified Poker Game," Organizational Behavior and Human Decision Processes, Elsevier, vol. 69(1), pages 31-49, January.
    6. Drew Fudenberg & David K. Levine, 1998. "The Theory of Learning in Games," MIT Press Books, The MIT Press, edition 1, volume 1, number 0262061945, April.
    7. Sterman, John, 1994. "Learning in and about complex systems," Working papers 3660-94, Massachusetts Institute of Technology (MIT), Sloan School of Management.
    8. Gibson, Faison P. & Fichman, Mark & Plaut, David C., 1997. "Learning in Dynamic Decision Tasks: Computational Model and Empirical Evidence," Organizational Behavior and Human Decision Processes, Elsevier, vol. 71(1), pages 1-35, July.
    9. Erev, Ido & Bereby-Meyer, Yoella & Roth, Alvin E., 1999. "The effect of adding a constant to all payoffs: experimental investigation, and implications for reinforcement learning models," Journal of Economic Behavior & Organization, Elsevier, vol. 39(1), pages 111-128, May.
    10. Sterman, John D., 1989. "Misperceptions of feedback in dynamic decision making," Organizational Behavior and Human Decision Processes, Elsevier, vol. 43(3), pages 301-335, June.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Faison P. Gibson, 2007. "Learning and transfer in dynamic decision environments," Computational and Mathematical Organization Theory, Springer, vol. 13(1), pages 39-61, March.
    2. Faison P. Gibson, 2003. "Supporting Learning in Evolving Dynamic Environments," Computational and Mathematical Organization Theory, Springer, vol. 9(4), pages 305-326, December.
    3. Erev, Ido & Bereby-Meyer, Yoella & Roth, Alvin E., 1999. "The effect of adding a constant to all payoffs: experimental investigation, and implications for reinforcement learning models," Journal of Economic Behavior & Organization, Elsevier, vol. 39(1), pages 111-128, May.
    4. Ido Erev & Eyal Ert & Alvin E. Roth, 2010. "A Choice Prediction Competition for Market Entry Games: An Introduction," Games, MDPI, vol. 1(2), pages 1-20, May.
    5. Duffy, John, 2006. "Agent-Based Models and Human Subject Experiments," Handbook of Computational Economics, in: Leigh Tesfatsion & Kenneth L. Judd (ed.), Handbook of Computational Economics, edition 1, volume 2, chapter 19, pages 949-1011, Elsevier.
    6. Andreas Flache & Michael W. Macy, 2002. "Stochastic Collusion and the Power Law of Learning," Journal of Conflict Resolution, Peace Science Society (International), vol. 46(5), pages 629-653, October.
    7. Ido Erev & Alvin Roth & Robert Slonim & Greg Barron, 2007. "Learning and equilibrium as useful approximations: Accuracy of prediction on randomly selected constant sum games," Economic Theory, Springer;Society for the Advancement of Economic Theory (SAET), vol. 33(1), pages 29-51, October.
    8. Ianni, A., 2002. "Reinforcement learning and the power law of practice: some analytical results," Discussion Paper Series In Economics And Econometrics 203, Economics Division, School of Social Sciences, University of Southampton.
    9. Anthony Ziegelmeyer & Frédéric Koessler & Kene Boun My & Laurent Denant-Boèmont, 2008. "Road Traffic Congestion and Public Information: An Experimental Investigation," Journal of Transport Economics and Policy, University of Bath, vol. 42(1), pages 43-82, January.
    10. DeJong, D.V. & Blume, A. & Neumann, G., 1998. "Learning in Sender-Receiver Games," Other publications TiSEM 4a8b4f46-f30b-4ad2-bb0c-1, Tilburg University, School of Economics and Management.
    11. Michael Foley & Rory Smead & Patrick Forber & Christoph Riedl, 2021. "Avoiding the bullies: The resilience of cooperation among unequals," PLOS Computational Biology, Public Library of Science, vol. 17(4), pages 1-18, April.
    12. Jean-François Laslier & Bernard Walliser, 2015. "Stubborn learning," Theory and Decision, Springer, vol. 79(1), pages 51-93, July.
    13. Walter Gutjahr, 2006. "Interaction dynamics of two reinforcement learners," Central European Journal of Operations Research, Springer;Slovak Society for Operations Research;Hungarian Operational Research Society;Czech Society for Operations Research;Österr. Gesellschaft für Operations Research (ÖGOR);Slovenian Society Informatika - Section for Operational Research;Croatian Operational Research Society, vol. 14(1), pages 59-86, February.
    14. Ed Hopkins, 2002. "Two Competing Models of How People Learn in Games," Econometrica, Econometric Society, vol. 70(6), pages 2141-2166, November.
    15. Franke, Reiner, 2003. "Reinforcement learning in the El Farol model," Journal of Economic Behavior & Organization, Elsevier, vol. 51(3), pages 367-388, July.
    16. Arifovic, Jasmina & Karaivanov, Alexander, 2010. "Learning by doing vs. learning from others in a principal-agent model," Journal of Economic Dynamics and Control, Elsevier, vol. 34(10), pages 1967-1992, October.
    17. Dürsch, Peter & Kolb, Albert & Oechssler, Jörg & Schipper, Burkhard C., 2005. "Rage Against the Machines: How Subjects Learn to Play Against Computers," Discussion Paper Series of SFB/TR 15 Governance and the Efficiency of Economic Systems 63, Free University of Berlin, Humboldt University of Berlin, University of Bonn, University of Mannheim, University of Munich.
    18. Martino Banchio & Giacomo Mantegazza, 2022. "Artificial Intelligence and Spontaneous Collusion," Papers 2202.05946, arXiv.org, revised Sep 2023.
    19. Gary Charness & Dan Levin, 2003. "Bayesian Updating vs. Reinforcement and Affect: A Laboratory Study," Levine's Bibliography 666156000000000180, UCLA Department of Economics.
    20. Martin G. Kocher & Matthias Sutter, 2005. "The Decision Maker Matters: Individual Versus Group Behaviour in Experimental Beauty-Contest Games," Economic Journal, Royal Economic Society, vol. 115(500), pages 200-223, January.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:comaot:v:8:y:2002:i:1:d:10.1023_a:1015128203878. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.