Printed from https://ideas.repec.org/a/plo/pcbi00/1004566.html

Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons

Author

Listed:
  • Kendra S Burbank

Abstract

The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field’s Sparse Coding algorithm, can be seen as autoencoder variants, and autoencoders have seen extensive use in the machine learning community. Despite their power and versatility, autoencoders have been difficult to implement in a biologically realistic fashion. The challenges include their need to calculate differences between two neuronal activities and their requirement for learning rules which lead to identical changes at feedforward and feedback connections. Here, we study a biologically realistic network of integrate-and-fire neurons with anatomical connectivity and synaptic plasticity that closely match those observed in cortical sensory areas. Our choice of synaptic plasticity rules is inspired by recent experimental and theoretical results suggesting that learning at feedback connections may have a different form from learning at feedforward connections, and our results depend critically on this novel choice of plasticity rules. Specifically, we propose that plasticity rules at feedforward versus feedback connections are temporally opposed versions of spike-timing dependent plasticity (STDP), leading to a symmetric combined rule we call Mirrored STDP (mSTDP). We show that with mSTDP, our network follows a learning rule that approximately minimizes an autoencoder loss function. When trained with whitened natural image patches, the learned synaptic weights resemble the receptive fields seen in V1.
Our results use realistic synaptic plasticity rules to show that the powerful autoencoder learning algorithm could be within the reach of real biological networks.

Author Summary

In the brain areas responsible for sensory processing, neurons learn over time to respond to specific features in the external world. Here, we propose a new, biologically plausible model for how groups of neurons can learn which specific features to respond to. Our work connects theoretical arguments about the optimal forms of neuronal representations with experimental results showing how synaptic connections change in response to neuronal activity. Specifically, we show that biologically realistic neurons can implement an algorithm known as autoencoder learning, in which the neurons learn to form representations that can be used to reconstruct their inputs. Autoencoder networks can successfully model neuronal responses in early sensory areas, and they are also frequently used in machine learning for training deep neural networks. Despite their power and utility, autoencoder networks have not been previously implemented in a fully biological fashion. To perform the autoencoder algorithm, neurons must modify their incoming, feedforward synaptic connections as well as their outgoing, feedback synaptic connections—and the changes to both must depend on the errors the network makes when it tries to reconstruct its input. Here, we propose a model for activity in the network and show that the commonly used spike-timing-dependent plasticity paradigm will implement the desired changes to feedforward synaptic connection weights. Critically, we use recent experimental evidence to propose that feedback connections learn according to a temporally reversed plasticity rule. We show mathematically that the two rules combined can approximately implement autoencoder learning, and confirm our results using simulated networks of integrate-and-fire neurons. By showing that biological neurons can implement this powerful algorithm, our work opens the door for the modeling of many learning paradigms from both the fields of computational neuroscience and machine learning.
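The core idea of the mirrored rule can be illustrated with a minimal Python sketch. This is a hypothetical pair-based formulation, not code from the paper: the exponential window shape is the standard STDP model, but the constants (A_PLUS, A_MINUS, TAU_MS) and function names are illustrative assumptions. The point it demonstrates is the abstract's key claim: if feedback synapses follow a temporally reversed STDP window, a single pre/post spike pair produces identical weight changes at a feedforward synapse and its reciprocal feedback synapse, which is the symmetry autoencoder learning requires.

```python
import math

# Illustrative window parameters (amplitudes and time constant in ms);
# these values are placeholders, not taken from the paper.
A_PLUS, A_MINUS, TAU_MS = 0.01, 0.012, 20.0

def stdp(dt_ms):
    """Standard STDP window: potentiate when pre precedes post
    (dt = t_post - t_pre > 0), depress otherwise."""
    if dt_ms > 0:
        return A_PLUS * math.exp(-dt_ms / TAU_MS)
    return -A_MINUS * math.exp(dt_ms / TAU_MS)

def mirrored_stdp(dt_ms):
    """Temporally reversed STDP window, proposed for feedback synapses."""
    return stdp(-dt_ms)

# One spike pair: an input-layer neuron fires at t = 0 ms and drives a
# hidden-layer neuron to fire at t = 5 ms.
dt = 5.0
dw_feedforward = stdp(dt)         # feedforward synapse: input -> hidden
# For the reciprocal feedback synapse (hidden -> input), the roles swap,
# so its own timing difference is -dt; the mirrored window applies there.
dw_feedback = mirrored_stdp(-dt)

# The two changes are identical: the combined rule (mSTDP) is symmetric,
# so paired feedforward and feedback weights change together.
assert dw_feedforward == dw_feedback
```

Under these assumptions the symmetry holds for any spike pair, since mirrored_stdp(-dt) equals stdp(dt) by construction; the paper's contribution is showing that this combined rule approximately descends an autoencoder loss in a spiking network.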

Suggested Citation

  • Kendra S Burbank, 2015. "Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons," PLOS Computational Biology, Public Library of Science, vol. 11(12), pages 1-25, December.
  • Handle: RePEc:plo:pcbi00:1004566
    DOI: 10.1371/journal.pcbi.1004566

    Download full text from publisher

    File URL: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004566
    Download Restriction: no

    File URL: https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1004566&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pcbi.1004566?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Matteo Saponati & Martin Vinck, 2023. "Sequence anticipation and spike-timing-dependent plasticity emerge from a predictive learning rule," Nature Communications, Nature, vol. 14(1), pages 1-13, December.
    2. Niranjan Chakravarthy & Shivkumar Sabesan & Kostas Tsakalis & Leon Iasemidis, 2009. "Controlling epileptic seizures in a neural mass model," Journal of Combinatorial Optimization, Springer, vol. 17(1), pages 98-116, January.
    3. Damien M O’Halloran, 2020. "Simulation model of CA1 pyramidal neurons reveal opposing roles for the Na+/Ca2+ exchange current and Ca2+-activated K+ current during spike-timing dependent synaptic plasticity," PLOS ONE, Public Library of Science, vol. 15(3), pages 1-12, March.
    4. Sacha Jennifer van Albada & Moritz Helias & Markus Diesmann, 2015. "Scalability of Asynchronous Networks Is Limited by One-to-One Mapping between Effective Connectivity and Correlations," PLOS Computational Biology, Public Library of Science, vol. 11(9), pages 1-37, September.
    5. Christian Keck & Cristina Savin & Jörg Lücke, 2012. "Feedforward Inhibition and Synaptic Scaling – Two Sides of the Same Coin?," PLOS Computational Biology, Public Library of Science, vol. 8(3), pages 1-15, March.
    6. Iris Reuveni & Sourav Ghosh & Edi Barkai, 2017. "Real Time Multiplicative Memory Amplification Mediated by Whole-Cell Scaling of Synaptic Response in Key Neurons," PLOS Computational Biology, Public Library of Science, vol. 13(1), pages 1-31, January.
    7. Mizusaki, Beatriz E.P. & Agnes, Everton J. & Erichsen, Rubem & Brunnet, Leonardo G., 2017. "Learning and retrieval behavior in recurrent neural networks with pre-synaptic dependent homeostatic plasticity," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 479(C), pages 279-286.
    8. Giorgia Dellaferrera & Stanisław Woźniak & Giacomo Indiveri & Angeliki Pantazi & Evangelos Eleftheriou, 2022. "Introducing principles of synaptic integration in the optimization of deep neural networks," Nature Communications, Nature, vol. 13(1), pages 1-14, December.
    9. Aseel Shomar & Lukas Geyrhofer & Noam E Ziv & Naama Brenner, 2017. "Cooperative stochastic binding and unbinding explain synaptic size dynamics and statistics," PLOS Computational Biology, Public Library of Science, vol. 13(7), pages 1-24, July.
    10. John Palmer & Adam Keane & Pulin Gong, 2017. "Learning and executing goal-directed choices by internally generated sequences in spiking neural circuits," PLOS Computational Biology, Public Library of Science, vol. 13(7), pages 1-23, July.
    11. Angulo-Garcia, David & Torcini, Alessandro, 2014. "Stable chaos in fluctuation driven neural circuits," Chaos, Solitons & Fractals, Elsevier, vol. 69(C), pages 233-245.
    12. Tiziano D’Albis & Richard Kempter, 2017. "A single-cell spiking model for the origin of grid-cell patterns," PLOS Computational Biology, Public Library of Science, vol. 13(10), pages 1-41, October.
    13. Juan Prada & Manju Sasi & Corinna Martin & Sibylle Jablonka & Thomas Dandekar & Robert Blum, 2018. "An open source tool for automatic spatiotemporal assessment of calcium transients and local ‘signal-close-to-noise’ activity in calcium imaging data," PLOS Computational Biology, Public Library of Science, vol. 14(3), pages 1-34, March.
    14. Maxime Lemieux & Narges Karimi & Frederic Bretzner, 2024. "Functional plasticity of glutamatergic neurons of medullary reticular nuclei after spinal cord injury in mice," Nature Communications, Nature, vol. 15(1), pages 1-15, December.
    15. Pierre Yger & Kenneth D Harris, 2013. "The Convallis Rule for Unsupervised Learning in Cortical Networks," PLOS Computational Biology, Public Library of Science, vol. 9(10), pages 1-16, October.
    16. Jannis Born & Juan M Galeazzi & Simon M Stringer, 2017. "Hebbian learning of hand-centred representations in a hierarchical neural network model of the primate visual system," PLOS ONE, Public Library of Science, vol. 12(5), pages 1-35, May.
    17. Chiara Bartolozzi & Giacomo Indiveri & Elisa Donati, 2022. "Embodied neuromorphic intelligence," Nature Communications, Nature, vol. 13(1), pages 1-14, December.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pcbi00:1004566. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: ploscompbiol (email available below). General contact details of provider: https://journals.plos.org/ploscompbiol/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.