IDEAS home Printed from https://ideas.repec.org/a/nat/nature/v389y1997i6651d10.1038_39309.html

How the brain learns to see objects and faces in an impoverished context

Author

Listed:
  • R. J. Dolan

    (Institute of Neurology
    Royal Free Hospital School of Medicine)

  • G. R. Fink

    (Institute of Neurology)

  • E. Rolls

    (University of Oxford)

  • M. Booth

    (University of Oxford)

  • A. Holmes

    (Institute of Neurology)

  • R. S. J. Frackowiak

    (Institute of Neurology)

  • K. J. Friston

    (Institute of Neurology)

Abstract

A degraded image of an object or face, which appears meaningless when seen for the first time, is easily recognizable after viewing an undegraded version of the same image1. The neural mechanisms by which this form of rapid perceptual learning facilitates perception are not well understood. Psychological theory suggests the involvement of systems for processing stimulus attributes, spatial attention and feature binding2, as well as those involved in visual imagery3. Here we investigate where and how this rapid perceptual learning is expressed in the human brain by using functional neuroimaging to measure brain activity during exposure to degraded images before and after exposure to the corresponding undegraded versions (Fig. 1). Perceptual learning of faces or objects enhanced the activity of inferior temporal regions known to be involved in face and object recognition respectively4,5,6. In addition, both face and object learning led to increased activity in medial and lateral parietal regions that have been implicated in attention7 and visual imagery8. We observed a strong coupling between the temporal face area and the medial parietal cortex when, and only when, faces were perceived. This suggests that perceptual learning involves direct interactions between areas involved in face recognition and those involved in spatial attention, feature binding and memory recall.

Figure 1 The experimental design. Binarized images of an object (top row) and face (bottom row) and their associated full grey-scale versions. The process of binarizing images involved transforming grey-scale levels into either black or white (two-tone) with values of either 0 or 1. Exposure to the associated grey-scale version took place 5 min before a second exposure to the two-tone version in study 1. In the pre- and post-learning scans, the subjects were told that they might see faces or objects, respectively, in the stimuli.
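The binarization step described in the figure legend (mapping each grey-scale level to black or white, i.e. 0 or 1) can be sketched as follows. This is a minimal illustration only: the `binarize` helper and the 0.5 cut-off are assumptions, since the paper does not state how the threshold was chosen.

```python
def binarize(gray, threshold=0.5):
    """Map a grey-scale image (nested lists of values in [0, 1]) to a
    two-tone image of 0s and 1s, in the spirit of the Fig. 1 stimuli.
    The 0.5 threshold is an assumption; the paper does not specify one."""
    return [[1 if px >= threshold else 0 for px in row] for row in gray]

# A tiny 2x2 example "image"
print(binarize([[0.2, 0.7], [0.9, 0.1]]))  # [[0, 1], [1, 0]]
```

In practice a per-image threshold (e.g. the median grey level) is often used so that roughly half the pixels come out black, but the paper does not say which rule was applied.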
A significant behavioural learning effect, operationally defined as the facilitation of performance by prior exposure to the grey-scale version, was evident for both the object and face conditions. Behavioural data were collected for all conditions at the end of each individual scan. The mean percentages of resolved percepts pre- and post-exposure to the grey-scale images were 13% and 87% in the object condition, and 55% and 93% in the face condition, respectively. In study 2, the same sequence and stimuli were used, but with pre- and post-exposure interposed with unrelated grey-scale images of objects and faces. Here the mean percentages of resolved percepts pre- and post-exposure to the non-associated grey-scale images were 8% and 10% in the object condition, and 19% and 25% in the face condition, respectively.
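The behavioural learning effect reported above is simply the pre-to-post gain in resolved percepts. A minimal sketch using the percentages from studies 1 and 2 (the `facilitation` helper is illustrative, not from the paper):

```python
def facilitation(pre_pct, post_pct):
    """Behavioural learning effect: gain in resolved percepts
    (percentage points) after exposure to a grey-scale image."""
    return post_pct - pre_pct

# Study 1: associated grey-scale versions
print(facilitation(13, 87))  # objects -> 74
print(facilitation(55, 93))  # faces   -> 38
# Study 2: non-associated grey-scale images
print(facilitation(8, 10))   # objects -> 2
print(facilitation(19, 25))  # faces   -> 6
```

The contrast between the two studies (74 and 38 percentage points versus 2 and 6) is what licenses attributing the effect to the associated grey-scale exposure rather than to mere repetition of the two-tone images.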

Suggested Citation

  • R. J. Dolan & G. R. Fink & E. Rolls & M. Booth & A. Holmes & R. S. J. Frackowiak & K. J. Friston, 1997. "How the brain learns to see objects and faces in an impoverished context," Nature, Nature, vol. 389(6651), pages 596-599, October.
  • Handle: RePEc:nat:nature:v:389:y:1997:i:6651:d:10.1038_39309
    DOI: 10.1038/39309

    Download full text from publisher

    File URL: https://www.nature.com/articles/39309
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1038/39309?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

As access to this document is restricted, you may want to search for a different version of it.

    Citations

Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Richard Hardstone & Michael Zhu & Adeen Flinker & Lucia Melloni & Sasha Devore & Daniel Friedman & Patricia Dugan & Werner K. Doyle & Orrin Devinsky & Biyu J. He, 2021. "Long-term priors influence visual perception through recruitment of long-range feedback," Nature Communications, Nature, vol. 12(1), pages 1-15, December.
    2. Tsutomu Murata & Takashi Hamada & Tetsuya Shimokawa & Manabu Tanifuji & Toshio Yanagida, 2014. "Stochastic Process Underlying Emergent Recognition of Visual Objects Hidden in Degraded Images," PLOS ONE, Public Library of Science, vol. 9(12), pages 1-32, December.
    3. Aleya Felicia Flechsenhar & Matthias Gamer, 2017. "Top-down influence on gaze patterns in the presence of social features," PLOS ONE, Public Library of Science, vol. 12(8), pages 1-20, August.

    More about this item

    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:nature:v:389:y:1997:i:6651:d:10.1038_39309. See general information about how to correct material in RePEc.


For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing. General contact details of provider: http://www.nature.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.