
Generic decoding of seen and imagined objects using hierarchical visual features

Authors

Listed:
  • Tomoyasu Horikawa

    (ATR Computational Neuroscience Laboratories)

  • Yukiyasu Kamitani

    (ATR Computational Neuroscience Laboratories
    Graduate School of Informatics, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto 606-8501, Japan)

Abstract

Object recognition is a key function in both human and machine vision. While brain decoding of seen and imagined objects has been achieved, the prediction is limited to training examples. We present a decoding approach for arbitrary objects using the machine vision principle that an object category is represented by a set of features rendered invariant through hierarchical processing. We show that visual features, including those derived from a deep convolutional neural network, can be predicted from fMRI patterns, and that greater accuracy is achieved for low-/high-level features with lower-/higher-level visual areas, respectively. Predicted features are used to identify seen/imagined object categories (extending beyond decoder training) from a set of computed features for numerous object images. Furthermore, decoding of imagined objects reveals progressive recruitment of higher-to-lower visual representations. Our results demonstrate a homology between human and machine vision and its utility for brain-based information retrieval.
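The identification step described in the abstract — matching feature vectors predicted from fMRI patterns against precomputed feature vectors for many candidate object categories — can be illustrated with a minimal sketch. This is not the authors' code: the function name, the toy 5-dimensional feature vectors, and the category labels are all hypothetical, and Pearson correlation is used here as a plausible similarity measure for feature matching.

```python
import numpy as np

def identify_category(predicted_features, candidate_features):
    """Return the candidate category whose feature vector correlates
    most strongly with the features decoded from brain activity,
    along with the per-category correlation scores."""
    scores = {
        name: np.corrcoef(predicted_features, feats)[0, 1]
        for name, feats in candidate_features.items()
    }
    return max(scores, key=scores.get), scores

# Toy, hand-picked feature vectors for three hypothetical categories.
candidates = {
    "cat":   np.array([1.0, 0.0, 0.0, 1.0, 0.0]),
    "car":   np.array([0.0, 1.0, 1.0, 0.0, 0.0]),
    "chair": np.array([0.0, 0.0, 1.0, 1.0, 1.0]),
}

# A simulated decoded feature pattern resembling the "car" vector.
decoded = np.array([0.1, 0.9, 0.8, 0.2, 0.1])

best, scores = identify_category(decoded, candidates)
print(best)  # → car
```

Because the candidate set can include categories never seen during decoder training, identification in this feature space extends beyond the training examples — the "generic decoding" of the title.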

Suggested Citation

  • Tomoyasu Horikawa & Yukiyasu Kamitani, 2017. "Generic decoding of seen and imagined objects using hierarchical visual features," Nature Communications, Nature, vol. 8(1), pages 1-15, August.
  • Handle: RePEc:nat:natcom:v:8:y:2017:i:1:d:10.1038_ncomms15037
    DOI: 10.1038/ncomms15037

    Download full text from publisher

    File URL: https://www.nature.com/articles/ncomms15037
    File Function: Abstract
    Download Restriction: no

    File URL: https://libkey.io/10.1038/ncomms15037?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item
    ---><---

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Guohua Shen & Tomoyasu Horikawa & Kei Majima & Yukiyasu Kamitani, 2019. "Deep image reconstruction from human brain activity," PLOS Computational Biology, Public Library of Science, vol. 15(1), pages 1-23, January.
    2. Benjamin Lahner & Kshitij Dwivedi & Polina Iamshchinina & Monika Graumann & Alex Lascelles & Gemma Roig & Alessandro Thomas Gifford & Bowen Pan & SouYoung Jin & N. Apurva Ratan Murty & Kendrick Kay et al., 2024. "Modeling short visual events through the BOLD moments video fMRI dataset and metadata," Nature Communications, Nature, vol. 15(1), pages 1-26, December.
    3. Serra E. Favila & Brice A. Kuhl & Jonathan Winawer, 2022. "Perception and memory have distinct spatial tuning properties in human visual cortex," Nature Communications, Nature, vol. 13(1), pages 1-21, December.
    4. Hojin Jang & Frank Tong, 2024. "Improved modeling of human vision by incorporating robustness to blur in convolutional neural networks," Nature Communications, Nature, vol. 15(1), pages 1-14, December.

    More about this item

    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:natcom:v:8:y:2017:i:1:d:10.1038_ncomms15037. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help by adding them using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.nature.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.