
Optimal structure of metaplasticity for adaptive learning

Author

Listed:
  • Peyman Khorsand
  • Alireza Soltani

Abstract

Learning from reward feedback in a changing environment requires a high degree of adaptability, yet precise estimation of reward information demands slow updates. Within the framework of estimating reward probability, we investigated how this tradeoff between adaptability and precision can be mitigated via metaplasticity, i.e., synaptic changes that do not always alter synaptic efficacy. Using mean-field analysis and Monte Carlo simulations, we identified ‘superior’ metaplastic models that can substantially overcome the adaptability-precision tradeoff. These models achieve both adaptability and precision by forming two separate sets of meta-states: reservoirs and buffers. Synapses in reservoir meta-states do not change their efficacy upon reward feedback, whereas those in buffer meta-states can. Rapid changes in efficacy are limited to synapses occupying buffers, creating a bottleneck that reduces noise without significantly decreasing adaptability. In contrast, the more populated reservoirs can generate a strong signal without manifesting any observable plasticity. By comparing the behavior of our model with several competing models during a dynamic probability estimation task, we found that superior metaplastic models perform close to optimally over a wider range of model parameters. Finally, we found that metaplastic models are robust to changes in model parameters and that metaplastic transitions are crucial for adaptive learning: replacing them with graded plastic transitions (transitions that change synaptic efficacy) reduces the ability to overcome the adaptability-precision tradeoff.
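The reservoir/buffer mechanism can be made concrete with a toy sketch of a population of binary metaplastic synapses (a minimal illustration in the spirit of the abstract, not the paper's fitted model; the number of meta-states, the transition rules, and the per-trial update probability are illustrative assumptions):

```python
import random

class MetaplasticSynapse:
    """Binary synapse with m meta-state levels per efficacy value.
    Level 1 is a 'buffer' (efficacy can flip); deeper levels act as
    'reservoirs' (feedback changes stability, not efficacy)."""
    def __init__(self, m=4):
        self.m = m
        self.strong = False
        self.level = 1  # 1 = buffer, m = deepest reservoir

    def potentiate(self):
        if self.strong:
            self.level = min(self.level + 1, self.m)  # sink deeper: more stable
        elif self.level == 1:
            self.strong = True       # only buffer synapses flip efficacy
        else:
            self.level -= 1          # reservoir synapse drifts toward the buffer

    def depress(self):
        if not self.strong:
            self.level = min(self.level + 1, self.m)
        elif self.level == 1:
            self.strong = False
        else:
            self.level -= 1

def estimate_probability(rewards, n_syn=500, m=4, seed=1):
    """Read out the reward-probability estimate as the fraction of strong synapses."""
    rng = random.Random(seed)
    syns = [MetaplasticSynapse(m) for _ in range(n_syn)]
    trace = []
    for r in rewards:
        for s in syns:
            # stochastic plasticity: only a random subset of synapses updates each trial
            if rng.random() < 0.5:
                s.potentiate() if r else s.depress()
        trace.append(sum(s.strong for s in syns) / n_syn)
    return trace
```

Because efficacy flips only happen in the buffer, a long run of consistent feedback pushes synapses into deep reservoirs, so brief noise cannot flip them back, while a genuine change in reward statistics first drains the reservoirs toward the buffer and then reverses the estimate.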
Overall, our results suggest that the ubiquitous unreliability of synaptic changes is evidence of metaplasticity, which can provide a robust mechanism for mitigating the tradeoff between adaptability and precision and thereby enable adaptive learning.

Author summary: Successful learning from experience and from environmental feedback requires that the reward value assigned to a given option or action be updated by a precise amount after each feedback. In the standard model of reward-based learning, known as reinforcement learning, the learning rate determines the strength of such updates. A large learning rate allows fast updating of values (high adaptability) but introduces noise (low precision), whereas a small learning rate does the opposite. Learning thus seems to be bounded by a tradeoff between adaptability and precision. Here, we asked whether there are synaptic mechanisms capable of adjusting the brain's level of plasticity according to reward statistics, thereby allowing the learning process to be adaptive. We showed that metaplasticity, that is, changes in the synaptic state that shape future synaptic modifications without any observable change in the strength of synapses, could provide such a mechanism, and we identified the optimal structure of such metaplasticity. We propose that metaplasticity, which sometimes causes no observable change in behavior and thus could be perceived as a lack of learning, can provide a robust mechanism for adaptive learning.
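The learning-rate tradeoff described in the summary can be demonstrated with a minimal delta-rule simulation (a generic reinforcement-learning sketch, not the paper's model; the reward probabilities, learning rates, and trial counts are illustrative assumptions):

```python
import random
import statistics

def estimate(p_seq, lr, seed=0):
    """Delta-rule estimate of a Bernoulli reward probability."""
    rng = random.Random(seed)
    v, trace = 0.5, []
    for p in p_seq:
        r = 1.0 if rng.random() < p else 0.0
        v += lr * (r - v)          # standard reinforcement-learning update
        trace.append(v)
    return trace

# Environment: reward probability 0.8 for 200 trials, then a switch to 0.2.
p_seq = [0.8] * 200 + [0.2] * 200
fast = estimate(p_seq, lr=0.30)    # adaptable but noisy
slow = estimate(p_seq, lr=0.02)    # precise but slow to adapt

# Precision while the environment is stable (lower variance is better).
noise_fast = statistics.pvariance(fast[100:200])
noise_slow = statistics.pvariance(slow[100:200])

# Adaptability shortly after the switch (lower error is better).
err_fast = abs(statistics.mean(fast[230:260]) - 0.2)
err_slow = abs(statistics.mean(slow[230:260]) - 0.2)
```

With these settings the large learning rate produces a noisier estimate during the stable period but tracks the switch much faster, while the small learning rate shows the opposite pattern; no single fixed learning rate wins on both measures, which is precisely the tradeoff the metaplastic models are designed to mitigate.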

Suggested Citation

  • Peyman Khorsand & Alireza Soltani, 2017. "Optimal structure of metaplasticity for adaptive learning," PLOS Computational Biology, Public Library of Science, vol. 13(6), pages 1-22, June.
  • Handle: RePEc:plo:pcbi00:1005630
    DOI: 10.1371/journal.pcbi.1005630

    Download full text from publisher

    File URL: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005630
    Download Restriction: no

    File URL: https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1005630&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pcbi.1005630?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item


    Citations

Citations are extracted by the CitEc Project.


    Cited by:

    1. Payam Piray & Nathaniel D. Daw, 2021. "A model for learning based on the joint estimation of stochasticity and volatility," Nature Communications, Nature, vol. 12(1), pages 1-16, December.
    2. Micha Heilbron & Florent Meyniel, 2019. "Confidence resets reveal hierarchical adaptive learning in humans," PLOS Computational Biology, Public Library of Science, vol. 15(4), pages 1-24, April.
    3. Shiva Farashahi & Alireza Soltani, 2021. "Computational mechanisms of distributed value representations and mixed learning strategies," Nature Communications, Nature, vol. 12(1), pages 1-18, December.
    4. Payam Piray & Nathaniel D Daw, 2020. "A simple model for learning in volatile environments," PLOS Computational Biology, Public Library of Science, vol. 16(7), pages 1-26, July.
    5. Payam Piray & Nathaniel D. Daw, 2024. "Computational processes of simultaneous learning of stochasticity and volatility in humans," Nature Communications, Nature, vol. 15(1), pages 1-16, December.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Micha Heilbron & Florent Meyniel, 2019. "Confidence resets reveal hierarchical adaptive learning in humans," PLOS Computational Biology, Public Library of Science, vol. 15(4), pages 1-24, April.
    2. Payam Piray & Nathaniel D Daw, 2020. "A simple model for learning in volatile environments," PLOS Computational Biology, Public Library of Science, vol. 16(7), pages 1-26, July.
    3. Hu, Yingyao & Kayaba, Yutaka & Shum, Matthew, 2013. "Nonparametric learning rules from bandit experiments: The eyes have it!," Games and Economic Behavior, Elsevier, vol. 81(C), pages 215-231.
    4. Mateus Joffily & Giorgio Coricelli, 2013. "Emotional Valence and the Free-Energy Principle," Post-Print halshs-00834063, HAL.
    5. Daniel S Kluger & Nico Broers & Marlen A Roehe & Moritz F Wurm & Niko A Busch & Ricarda I Schubotz, 2020. "Exploitation of local and global information in predictive processing," PLOS ONE, Public Library of Science, vol. 15(4), pages 1-17, April.
    6. Dimitrije Marković & Andrea M F Reiter & Stefan J Kiebel, 2019. "Predicting change: Approximate inference under explicit representation of temporal structure in changing environments," PLOS Computational Biology, Public Library of Science, vol. 15(1), pages 1-31, January.
    7. Vahid Moosavi & Giulio Isacchini, 2016. "A Markovian Model of the Evolving World Input-Output Network," Papers 1612.06186, arXiv.org, revised Sep 2017.
    8. Cruz, Juan Alberto Rojas, 2020. "Sensitivity of the stationary distributions of denumerable Markov chains," Statistics & Probability Letters, Elsevier, vol. 166(C).
    9. Eilon Solan & Nicolas Vieille, 2002. "Perturbed Markov Chains," Discussion Papers 1342, Northwestern University, Center for Mathematical Studies in Economics and Management Science.
    10. Sam Gijsen & Miro Grundei & Robert T Lange & Dirk Ostwald & Felix Blankenburg, 2021. "Neural surprise in somatosensory Bayesian learning," PLOS Computational Biology, Public Library of Science, vol. 17(2), pages 1-36, February.
    11. Philipp Schustek & Rubén Moreno-Bote, 2018. "Instance-based generalization for human judgments about uncertainty," PLOS Computational Biology, Public Library of Science, vol. 14(6), pages 1-27, June.
    12. Jill X O'Reilly & Saad Jbabdi & Matthew F S Rushworth & Timothy E J Behrens, 2013. "Brain Systems for Probabilistic and Dynamic Prediction: Computational Specificity and Integration," PLOS Biology, Public Library of Science, vol. 11(9), pages 1-14, September.
    13. Vahid Moosavi & Giulio Isacchini, 2017. "A Markovian model of evolving world input-output network," PLOS ONE, Public Library of Science, vol. 12(10), pages 1-18, October.
    14. Maria Gamboa & Maria Jesus Lopez-Herrero, 2020. "The Effect of Setting a Warning Vaccination Level on a Stochastic SIVS Model with Imperfect Vaccine," Mathematics, MDPI, vol. 8(7), pages 1-23, July.
    15. Sang Wan Lee & John P O’Doherty & Shinsuke Shimojo, 2015. "Neural Computations Mediating One-Shot Learning in the Human Brain," PLOS Biology, Public Library of Science, vol. 13(4), pages 1-36, April.
    16. P.-C.G. Vassiliou, 2021. "Non-Homogeneous Markov Set Systems," Mathematics, MDPI, vol. 9(5), pages 1-25, February.
    17. Nazanin Mohammadi Sepahvand & Elisabeth Stöttinger & James Danckert & Britt Anderson, 2014. "Sequential Decisions: A Computational Comparison of Observational and Reinforcement Accounts," PLOS ONE, Public Library of Science, vol. 9(4), pages 1-8, April.
    18. Fletcher, Cameron S. & Ganegodage, K. Renuka & Hildenbrand, Marian D. & Rambaldi, Alicia N., 2022. "The behaviour of property prices when affected by infrequent floods," Land Use Policy, Elsevier, vol. 122(C).
    19. Florent Meyniel & Daniel Schlunegger & Stanislas Dehaene, 2015. "The Sense of Confidence during Probabilistic Learning: A Normative Account," PLOS Computational Biology, Public Library of Science, vol. 11(6), pages 1-25, June.
    20. Bruno B Averbeck, 2015. "Theory of Choice in Bandit, Information Sampling and Foraging Tasks," PLOS Computational Biology, Public Library of Science, vol. 11(3), pages 1-28, March.



    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.