
The Computational Development of Reinforcement Learning during Adolescence

Author

Listed:
  • Stefano Palminteri
  • Emma J Kilford
  • Giorgio Coricelli
  • Sarah-Jayne Blakemore

Abstract

Adolescence is a period of life characterised by changes in learning and decision-making. Learning and decision-making do not rely on a unitary system, but instead require the coordination of different cognitive processes that can be mathematically formalised as dissociable computational modules. Here, we aimed to trace the developmental time-course of the computational modules responsible for learning from reward or punishment, and learning from counterfactual feedback. Adolescents and adults carried out a novel reinforcement learning paradigm in which participants learned the association between cues and probabilistic outcomes, where the outcomes differed in valence (reward versus punishment) and feedback was either partial or complete (either the outcome of the chosen option only, or the outcomes of both the chosen and unchosen option, were displayed). Computational strategies changed during development: whereas adolescents’ behaviour was better explained by a basic reinforcement learning algorithm, adults’ behaviour integrated increasingly complex computational features, namely a counterfactual learning module (enabling enhanced performance in the presence of complete feedback) and a value contextualisation module (enabling symmetrical reward and punishment learning). Unlike adults, adolescent performance did not benefit from counterfactual (complete) feedback. In addition, while adults learned symmetrically from both reward and punishment, adolescents learned from reward but were less likely to learn from punishment. This tendency to rely on rewards and not to consider alternative consequences of actions might contribute to our understanding of decision-making in adolescence.

Author Summary: We employed a novel learning task to investigate how adolescents and adults learn from reward versus punishment, and to counterfactual feedback about decisions. Computational analyses revealed that adults and adolescents did not implement the same algorithm to solve the learning task. In contrast to adults, adolescents’ performance did not take into account counterfactual information; adolescents also learned preferentially to seek rewards rather than to avoid punishments, whereas adults learned to seek and avoid both equally. Increasing our understanding of computational changes in reinforcement learning during adolescence may provide insights into adolescent value-based decision-making. Our results might also have implications for education, since they suggest that adolescents benefit more from positive feedback than from negative feedback in learning tasks.
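
To make the computational modules in the abstract concrete, the sketch below simulates a minimal Rescorla-Wagner learner for a two-option probabilistic task in which counterfactual updating and value contextualisation can be switched on or off. The task probabilities, parameter names (alpha, beta) and the reference-point formulation of contextualisation are assumptions chosen for illustration; they are not the exact model fitted in the paper.

import numpy as np

def simulate_learner(n_trials=100, alpha=0.3, beta=3.0,
                     counterfactual=False, contextualise=False, seed=0):
    """Minimal Rescorla-Wagner learner for a two-option probabilistic task.

    Illustrative sketch only: parameter values and the reference-point
    (contextualisation) formulation are assumptions, not the exact model
    fitted in the paper.
    """
    rng = np.random.default_rng(seed)
    p_good = np.array([0.75, 0.25])   # hypothetical outcome probabilities per option
    q = np.zeros(2)                   # learned option values
    v_context = 0.0                   # running estimate of the context (reference) value
    choices = np.zeros(n_trials, dtype=int)

    for t in range(n_trials):
        # softmax choice between the two options
        p_choose_0 = 1.0 / (1.0 + np.exp(-beta * (q[0] - q[1])))
        chosen = 0 if rng.random() < p_choose_0 else 1
        unchosen = 1 - chosen

        # factual and counterfactual outcomes (1 = good outcome, 0 = bad outcome)
        r_chosen = float(rng.random() < p_good[chosen])
        r_unchosen = float(rng.random() < p_good[unchosen])

        # value contextualisation module: outcomes are recentred on an estimate
        # of the average value of the context, which makes learning from reward
        # and punishment symmetrical
        if contextualise:
            v_context += alpha * (0.5 * (r_chosen + r_unchosen) - v_context)
        reference = v_context if contextualise else 0.0

        # factual learning (basic module): prediction-error update of the chosen option
        q[chosen] += alpha * ((r_chosen - reference) - q[chosen])

        # counterfactual learning module: update the unchosen option from the
        # forgone outcome when complete feedback is available
        if counterfactual:
            q[unchosen] += alpha * ((r_unchosen - reference) - q[unchosen])

        choices[t] = chosen

    return choices

if __name__ == "__main__":
    basic = simulate_learner(counterfactual=False, contextualise=False)
    full = simulate_learner(counterfactual=True, contextualise=True)
    print("correct-choice rate, basic model:", (basic == 0).mean())
    print("correct-choice rate, full model: ", (full == 0).mean())

With both modules disabled the learner corresponds to the basic algorithm that best described adolescents in the study; enabling them approximates the richer strategy attributed to adults, under the simplifying assumptions noted above.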

Suggested Citation

  • Stefano Palminteri & Emma J Kilford & Giorgio Coricelli & Sarah-Jayne Blakemore, 2016. "The Computational Development of Reinforcement Learning during Adolescence," PLOS Computational Biology, Public Library of Science, vol. 12(6), pages 1-25, June.
  • Handle: RePEc:plo:pcbi00:1004953
    DOI: 10.1371/journal.pcbi.1004953

    Download full text from publisher

    File URL: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004953
    Download Restriction: no

    File URL: https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1004953&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pcbi.1004953?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    References listed on IDEAS

    1. Stefano Palminteri & Mehdi Khamassi & Mateus Joffily & Giorgio Coricelli, 2015. "Contextual modulation of value signals in reward and punishment learning," Nature Communications, Nature, vol. 6(1), pages 1-14, November.
    2. Mathias Pessiglione & Ben Seymour & Guillaume Flandin & Raymond J. Dolan & Chris D. Frith, 2006. "Dopamine-dependent prediction errors underpin reward-seeking behaviour in humans," Nature, Nature, vol. 442(7106), pages 1042-1045, August.
    3. Jean Daunizeau & Vincent Adam & Lionel Rigoux, 2014. "VBA: A Probabilistic Treatment of Nonlinear Models for Neurobiological and Behavioural Data," PLOS Computational Biology, Public Library of Science, vol. 10(1), pages 1-16, January.
    4. Stefano Palminteri & Mehdi Khamassi & Mateus Joffily & Giorgio Coricelli, 2015. "Contextual modulation of value signals in reward and punishment learning," Post-Print halshs-01236045, HAL.
    5. Colin F. Camerer & Teck-Hua Ho & Juin-Kuan Chong, 2004. "A Cognitive Hierarchy Model of Games," The Quarterly Journal of Economics, President and Fellows of Harvard College, vol. 119(3), pages 861-898.

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Marieke Jepma & Jessica V Schaaf & Ingmar Visser & Hilde M Huizenga, 2020. "Uncertainty-driven regulation of learning and exploration in adolescents: A computational account," PLOS Computational Biology, Public Library of Science, vol. 16(9), pages 1-29, September.
    2. Maël Lebreton & Karin Bacily & Stefano Palminteri & Jan B Engelmann, 2019. "Contextual influence on confidence judgments in human reinforcement learning," PLOS Computational Biology, Public Library of Science, vol. 15(4), pages 1-27, April.
    3. Caroline J. Charpentier & Qianying Wu & Seokyoung Min & Weilun Ding & Jeffrey Cockburn & John P. O’Doherty, 2024. "Heterogeneity in strategy use during arbitration between experiential and observational learning," Nature Communications, Nature, vol. 15(1), pages 1-20, December.
    4. Stefano Palminteri & Germain Lefebvre & Emma J Kilford & Sarah-Jayne Blakemore, 2017. "Confirmation bias in human reinforcement learning: Evidence from counterfactual feedback processing," PLOS Computational Biology, Public Library of Science, vol. 13(8), pages 1-22, August.
    5. Ruth Pauli & Inti A. Brazil & Gregor Kohls & Miriam C. Klein-Flügge & Jack C. Rogers & Dimitris Dikeos & Roberta Dochnal & Graeme Fairchild & Aranzazu Fernández-Rivas & Beate Herpertz-Dahlmann & Amaia, 2023. "Action initiation and punishment learning differ from childhood to adolescence while reward learning remains stable," Nature Communications, Nature, vol. 14(1), pages 1-15, December.
    6. Anna P. Giron & Simon Ciranka & Eric Schulz & Wouter Bos & Azzurra Ruggeri & Björn Meder & Charley M. Wu, 2023. "Developmental changes in exploration resemble stochastic optimization," Nature Human Behaviour, Nature, vol. 7(11), pages 1955-1967, November.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Antoine Collomb-Clerc & Maëlle C. M. Gueguen & Lorella Minotti & Philippe Kahane & Vincent Navarro & Fabrice Bartolomei & Romain Carron & Jean Regis & Stephan Chabardès & Stefano Palminteri & Julien B, 2023. "Human thalamic low-frequency oscillations correlate with expected value and outcomes during reinforcement learning," Nature Communications, Nature, vol. 14(1), pages 1-10, December.
    2. Maël Lebreton & Karin Bacily & Stefano Palminteri & Jan B Engelmann, 2019. "Contextual influence on confidence judgments in human reinforcement learning," PLOS Computational Biology, Public Library of Science, vol. 15(4), pages 1-27, April.
    3. Stefano Palminteri & Germain Lefebvre & Emma J Kilford & Sarah-Jayne Blakemore, 2017. "Confirmation bias in human reinforcement learning: Evidence from counterfactual feedback processing," PLOS Computational Biology, Public Library of Science, vol. 13(8), pages 1-22, August.
    4. Lefebvre, Germain & Nioche, Aurélien & Bourgeois-Gironde, Sacha & Palminteri, Stefano, 2018. "An Empirical Investigation of the Emergence of Money: Contrasting Temporal Difference and Opportunity Cost Reinforcement Learning," MPRA Paper 85586, University Library of Munich, Germany.
    5. Chih-Chung Ting & Nahuel Salem-Garcia & Stefano Palminteri & Jan B. Engelmann & Maël Lebreton, 2023. "Neural and computational underpinnings of biased confidence in human reinforcement learning," Nature Communications, Nature, vol. 14(1), pages 1-18, December.
    6. Johann Lussange & Stefano Vrizzi & Sacha Bourgeois-Gironde & Stefano Palminteri & Boris Gutkin, 2023. "Stock Price Formation: Precepts from a Multi-Agent Reinforcement Learning Model," Computational Economics, Springer;Society for Computational Economics, vol. 61(4), pages 1523-1544, April.
    7. Johann Lussange & Ivan Lazarevich & Sacha Bourgeois-Gironde & Stefano Palminteri & Boris Gutkin, 2021. "Modelling Stock Markets by Multi-agent Reinforcement Learning," Computational Economics, Springer;Society for Computational Economics, vol. 57(1), pages 113-147, January.
    8. M. A. Pisauro & E. F. Fouragnan & D. H. Arabadzhiyska & M. A. J. Apps & M. G. Philiastides, 2022. "Neural implementation of computational mechanisms underlying the continuous trade-off between cooperation and competition," Nature Communications, Nature, vol. 13(1), pages 1-18, December.
    9. He A Xu & Alireza Modirshanechi & Marco P Lehmann & Wulfram Gerstner & Michael H Herzog, 2021. "Novelty is not surprise: Human exploratory and adaptive behavior in sequential decision-making," PLOS Computational Biology, Public Library of Science, vol. 17(6), pages 1-32, June.
    10. Koen M. M. Frolichs & Gabriela Rosenblau & Christoph W. Korn, 2022. "Incorporating social knowledge structures into computational models," Nature Communications, Nature, vol. 13(1), pages 1-18, December.
    11. Johann Lussange & Boris Gutkin, 2023. "Order book regulatory impact on stock market quality: a multi-agent reinforcement learning perspective," Papers 2302.04184, arXiv.org.
    12. Romane Cecchi & Antoine Collomb-Clerc & Inès Rachidi & Lorella Minotti & Philippe Kahane & Mathias Pessiglione & Julien Bastin, 2024. "Direct stimulation of anterior insula and ventromedial prefrontal cortex disrupts economic choices," Nature Communications, Nature, vol. 15(1), pages 1-11, December.
    13. Daniel Serra, 2021. "Decision-making: from neuroscience to neuroeconomics—an overview," Theory and Decision, Springer, vol. 91(1), pages 1-80, July.
    14. Wei-Hsiang Lin & Justin L Gardner & Shih-Wei Wu, 2020. "Context effects on probability estimation," PLOS Biology, Public Library of Science, vol. 18(3), pages 1-45, March.
    15. Rémi Philippe & Rémi Janet & Koosha Khalvati & Rajesh P. N. Rao & Daeyeol Lee & Jean-Claude Dreher, 2024. "Neurocomputational mechanisms involved in adaptation to fluctuating intentions of others," Nature Communications, Nature, vol. 15(1), pages 1-15, December.
    16. Marie Devaine & Guillaume Hollard & Jean Daunizeau, 2014. "The Social Bayesian Brain: Does Mentalizing Make a Difference When We Learn?," PLOS Computational Biology, Public Library of Science, vol. 10(12), pages 1-14, December.
    17. Mikhail S. Spektor & Hannah Seidler, 2022. "Violations of economic rationality due to irrelevant information during learning in decision from experience," Judgment and Decision Making, Society for Judgment and Decision Making, vol. 17(2), pages 425-448, March.
    18. Simon Ciranka & Juan Linde-Domingo & Ivan Padezhki & Clara Wicharz & Charley M. Wu & Bernhard Spitzer, 2022. "Asymmetric reinforcement learning facilitates human inference of transitive relations," Nature Human Behaviour, Nature, vol. 6(4), pages 555-564, April.
    19. Lou Safra & Coralie Chevallier & Stefano Palminteri, 2019. "Depressive symptoms are associated with blunted reward learning in social contexts," PLOS Computational Biology, Public Library of Science, vol. 15(7), pages 1-22, July.
    20. Bosch-Domènech, Antoni & Vriend, Nicolaas J., 2013. "On the role of non-equilibrium focal points as coordination devices," Journal of Economic Behavior & Organization, Elsevier, vol. 94(C), pages 52-67.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pcbi00:1004953. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: ploscompbiol (email available below). General contact details of provider: https://journals.plos.org/ploscompbiol/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.