
A critique of pure learning and what artificial neural networks can learn from animal brains

Author

Listed:
  • Anthony M. Zador

    (Cold Spring Harbor Laboratory)

Abstract

Artificial neural networks (ANNs) have undergone a revolution, catalyzed by better supervised learning algorithms. However, in stark contrast to young animals (including humans), training such networks requires enormous numbers of labeled examples, leading to the belief that animals must rely instead mainly on unsupervised learning. Here we argue that most animal behavior is not the result of clever learning algorithms—supervised or unsupervised—but is encoded in the genome. Specifically, animals are born with highly structured brain connectivity, which enables them to learn very rapidly. Because the wiring diagram is far too complex to be specified explicitly in the genome, it must be compressed through a “genomic bottleneck”. The genomic bottleneck suggests a path toward ANNs capable of rapid learning.
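
The abstract's central idea, that a vast wiring diagram is specified only indirectly through a compact genomic code, can be illustrated with a toy compression experiment. The sketch below is not from the paper: it uses a low-rank factorization (NumPy's SVD) as a stand-in for the genomic bottleneck, and the names genome_encode, genome_decode, and genome_size are illustrative assumptions. It shows that structured connectivity survives the bottleneck far better than random connectivity, which is the sense in which a small genome can encode useful innate wiring without specifying every synapse.

import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 500, 500

# A structured wiring diagram: topographic Gaussian connectivity between two
# 1-D sheets of neurons (a stand-in for "highly structured brain connectivity").
x = np.linspace(0.0, 1.0, n_pre)[:, None]
y = np.linspace(0.0, 1.0, n_post)[None, :]
structured = np.exp(-((x - y) ** 2) / (2 * 0.1 ** 2))

# An unstructured (random) wiring diagram of the same size, for comparison.
unstructured = rng.standard_normal((n_pre, n_post))

def genome_encode(wiring, genome_size):
    # "Evolution": compress the wiring into a small genome (best rank-k factors).
    U, s, Vt = np.linalg.svd(wiring, full_matrices=False)
    return U[:, :genome_size] * s[:genome_size], Vt[:genome_size, :]

def genome_decode(genome):
    # "Development": expand the compact genome back into a full wiring diagram.
    left, right = genome
    return left @ right

for name, wiring in [("structured", structured), ("unstructured", unstructured)]:
    genome = genome_encode(wiring, genome_size=20)
    innate = genome_decode(genome)
    error = np.linalg.norm(wiring - innate) / np.linalg.norm(wiring)
    n_genome = sum(part.size for part in genome)
    print(f"{name:>12}: {wiring.size} weights -> {n_genome} genome parameters "
          f"({wiring.size / n_genome:.0f}x smaller), relative error {error:.2f}")

Varying genome_size or the connectivity profile shows how sharply the achievable compression depends on how structured the wiring is; this is only a rough analogy for the argument in the paper, not its method.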

Suggested Citation

  • Anthony M. Zador, 2019. "A critique of pure learning and what artificial neural networks can learn from animal brains," Nature Communications, Nature, vol. 10(1), pages 1-7, December.
  • Handle: RePEc:nat:natcom:v:10:y:2019:i:1:d:10.1038_s41467-019-11786-6
    DOI: 10.1038/s41467-019-11786-6

    Download full text from publisher

    File URL: https://www.nature.com/articles/s41467-019-11786-6
    File Function: Abstract
    Download Restriction: no

    File URL: https://libkey.io/10.1038/s41467-019-11786-6?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Joachim A Holst-Hansen & Carsten Bergenholtz, 2020. "Does the size of rewards influence performance in cognitively demanding tasks?," PLOS ONE, Public Library of Science, vol. 15(10), pages 1-15, October.
    2. Christopher R. Madan, 2020. "Considerations for Comparing Video Game AI Agents with Humans," Challenges, MDPI, vol. 11(2), pages 1-12, August.
    3. Federico Bolaños & Javier G. Orlandi & Ryo Aoki & Akshay V. Jagadeesh & Justin L. Gardner & Andrea Benucci, 2024. "Efficient coding of natural images in the mouse visual cortex," Nature Communications, Nature, vol. 15(1), pages 1-17, December.
    4. Bao, Han & Yu, Xihong & Zhang, Yunzhen & Liu, Xiaofeng & Chen, Mo, 2023. "Initial condition-offset regulating synchronous dynamics and energy diversity in a memristor-coupled network of memristive HR neurons," Chaos, Solitons & Fractals, Elsevier, vol. 177(C).
    5. Francesco Poli & Yi-Lin Li & Pravallika Naidu & Rogier B. Mars & Sabine Hunnius & Azzurra Ruggeri, 2024. "Toddlers strategically adapt their information search," Nature Communications, Nature, vol. 15(1), pages 1-10, December.
    6. Barbara Feulner & Matthew G. Perich & Raeed H. Chowdhury & Lee E. Miller & Juan A. Gallego & Claudia Clopath, 2022. "Small, correlated changes in synaptic connectivity may facilitate rapid motor learning," Nature Communications, Nature, vol. 13(1), pages 1-14, December.
    7. Pomeroy, Brett & Grilc, Miha & Likozar, Blaž, 2022. "Artificial neural networks for bio-based chemical production or biorefining: A review," Renewable and Sustainable Energy Reviews, Elsevier, vol. 153(C).
    8. Dániel L. Barabási & Gregor F. P. Schuhknecht & Florian Engert, 2024. "Functional neuronal circuits emerge in the absence of developmental activity," Nature Communications, Nature, vol. 15(1), pages 1-14, December.
    9. Bossert, Leonie & Hagendorff, Thilo, 2021. "Animals and AI. The role of animals in AI research and application – An overview and ethical evaluation," Technology in Society, Elsevier, vol. 67(C).
    10. Alexander Ororbia & Daniel Kifer, 2022. "The neural coding framework for learning generative models," Nature Communications, Nature, vol. 13(1), pages 1-14, December.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:natcom:v:10:y:2019:i:1:d:10.1038_s41467-019-11786-6. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help by adding them using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.nature.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.