Printed from https://ideas.repec.org/a/nat/nathum/v1y2017i9d10.1038_s41562-017-0186-2.html

Letter perception emerges from unsupervised deep learning and recycling of natural image features

Authors

  • Alberto Testolin (University of Padova)
  • Ivilin Stoianov (Centre National de la Recherche Scientifique, Aix-Marseille Université; National Research Council (CNR))
  • Marco Zorzi (University of Padova; IRCCS San Camillo Hospital Foundation)

Abstract

The use of written symbols is a major achievement of human cultural evolution. However, how abstract letter representations might be learned from vision is still an unsolved problem1,2. Here, we present a large-scale computational model of letter recognition based on deep neural networks3,4, which develops a hierarchy of increasingly complex internal representations in a completely unsupervised way by fitting a probabilistic, generative model to the visual input5,6. In line with the hypothesis that learning written symbols partially recycles pre-existing neuronal circuits for object recognition7, earlier processing levels in the model exploit domain-general visual features learned from natural images, while domain-specific features emerge in upstream neurons following exposure to printed letters. We show that these high-level representations can be easily mapped to letter identities even for noise-degraded images, producing accurate simulations of a broad range of empirical findings on letter perception in human observers. Our model shows that by reusing natural visual primitives, learning written symbols requires only limited, domain-specific tuning, supporting the hypothesis that their shape has been culturally selected to match the statistical structure of natural environments8.
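The unsupervised generative learning described in the abstract can be illustrated with a toy sketch. Deep generative models of this kind are commonly built from stacked restricted Boltzmann machines trained layer by layer; the minimal example below trains a single RBM on random binary "patches" with one-step contrastive divergence. All class names, hyperparameters, and the synthetic data are illustrative assumptions, not the authors' actual model or code.

```python
import numpy as np

rng = np.random.default_rng(0)

class RBM:
    """Restricted Boltzmann machine trained with one-step contrastive
    divergence (CD-1). Illustrative sketch, not the published model."""

    def __init__(self, n_visible, n_hidden):
        self.W = rng.normal(0.0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def hidden_probs(self, v):
        return self._sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return self._sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0, lr=0.05):
        # Positive phase: hidden activations driven by the data.
        ph0 = self.hidden_probs(v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # Negative phase: one reconstruction step.
        v1 = self.visible_probs(h0)
        ph1 = self.hidden_probs(v1)
        # Update parameters from the difference of correlations.
        n = len(v0)
        self.W += lr * (v0.T @ ph0 - v1.T @ ph1) / n
        self.b_v += lr * (v0 - v1).mean(axis=0)
        self.b_h += lr * (ph0 - ph1).mean(axis=0)
        return np.mean((v0 - v1) ** 2)   # reconstruction error

# Toy binary patches stand in for natural images / letter bitmaps.
data = (rng.random((200, 64)) < 0.3).astype(float)

rbm = RBM(n_visible=64, n_hidden=16)
errors = [np.mean([rbm.cd1_step(data[i:i + 20]) for i in range(0, 200, 20)])
          for _ in range(30)]
print(f"reconstruction error: {errors[0]:.3f} -> {errors[-1]:.3f}")
```

In a full deep belief network, the hidden activations of one trained RBM become the input for the next, yielding the hierarchy of increasingly complex features the abstract describes; a simple classifier can then be fit on the top-level representations to read out letter identities.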

Suggested Citation

  • Alberto Testolin & Ivilin Stoianov & Marco Zorzi, 2017. "Letter perception emerges from unsupervised deep learning and recycling of natural image features," Nature Human Behaviour, Nature, vol. 1(9), pages 657-664, September.
  • Handle: RePEc:nat:nathum:v:1:y:2017:i:9:d:10.1038_s41562-017-0186-2
    DOI: 10.1038/s41562-017-0186-2

    Download full text from publisher

    File URL: https://www.nature.com/articles/s41562-017-0186-2
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1038/s41562-017-0186-2?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    As access to this document is restricted, you may want to search for a different version of it.

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Katherine R. Storrs & Barton L. Anderson & Roland W. Fleming, 2021. "Unsupervised learning predicts human perception and misperception of gloss," Nature Human Behaviour, Nature, vol. 5(10), pages 1402-1417, October.

    More about this item

    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:nathum:v:1:y:2017:i:9:d:10.1038_s41562-017-0186-2. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.nature.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.