Authors
Listed:
- Matthew K. Leonard (University of California, San Francisco)
- Laura Gwilliams (University of California, San Francisco)
- Kristin K. Sellers (University of California, San Francisco)
- Jason E. Chung (University of California, San Francisco)
- Duo Xu (University of California, San Francisco)
- Gavin Mischler (Columbia University)
- Nima Mesgarani (Columbia University)
- Marleen Welkenhuysen (IMEC)
- Barundeb Dutta (IMEC)
- Edward F. Chang (University of California, San Francisco)
Abstract
Understanding the neural basis of speech perception requires that we study the human brain both at the scale of the fundamental computational unit of neurons and in their organization across the depth of cortex. Here we used high-density Neuropixels arrays1–3 to record from 685 neurons across cortical layers at nine sites in a high-level auditory region that is critical for speech, the superior temporal gyrus4,5, while participants listened to spoken sentences. Single neurons encoded a wide range of speech sound cues, including features of consonants and vowels, relative vocal pitch, onsets, amplitude envelope and sequence statistics. Neurons at each cross-laminar recording exhibited dominant tuning to a primary speech feature while also containing a substantial proportion of neurons that encoded other features contributing to heterogeneous selectivity. Spatially, neurons at similar cortical depths tended to encode similar speech features. Activity across all cortical layers was predictive of high-frequency field potentials (electrocorticography), providing a neuronal origin for macroelectrode recordings from the cortical surface. Together, these results establish single-neuron tuning across the cortical laminae as an important dimension of speech encoding in human superior temporal gyrus.
Suggested Citation
Matthew K. Leonard & Laura Gwilliams & Kristin K. Sellers & Jason E. Chung & Duo Xu & Gavin Mischler & Nima Mesgarani & Marleen Welkenhuysen & Barundeb Dutta & Edward F. Chang, 2024.
"Large-scale single-neuron speech sound encoding across the depth of human cortex,"
Nature, Nature, vol. 626(7999), pages 593-602, February.
Handle:
RePEc:nat:nature:v:626:y:2024:i:7999:d:10.1038_s41586-023-06839-2
DOI: 10.1038/s41586-023-06839-2
Download full text from publisher
As access to this document is restricted, you may want to search for a different version of it.
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:nature:v:626:y:2024:i:7999:d:10.1038_s41586-023-06839-2. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.nature.com.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.