Author
Listed:
- Jing Cai (Harvard Medical School)
- Alex E. Hadjinicolaou (Harvard Medical School)
- Angelique C. Paulk (Harvard Medical School; Massachusetts General Hospital)
- Daniel J. Soper (Harvard Medical School)
- Tian Xia (Harvard Medical School)
- Alexander F. Wang (Harvard Medical School)
- John D. Rolston (Harvard Medical School)
- R. Mark Richardson (Harvard Medical School)
- Ziv M. Williams (Harvard Medical School; Program in Neuroscience; Harvard-MIT Division of Health Sciences and Technology)
- Sydney S. Cash (Harvard Medical School; Massachusetts General Hospital; Harvard-MIT Division of Health Sciences and Technology)
Abstract
Through conversation, humans engage in a complex process of alternating speech production and comprehension to communicate. The neural mechanisms that underlie these complementary processes through which information is precisely conveyed by language, however, remain poorly understood. Here, we used pre-trained deep learning natural language processing models in combination with intracranial neuronal recordings to discover neural signals that reliably reflected speech production, comprehension, and their transitions during natural conversation between individuals. Our findings indicate that the neural activities that reflected speech production and comprehension were broadly distributed throughout frontotemporal areas across multiple frequency bands. We also find that these activities were specific to the words and sentences being conveyed and that they were dependent on the word’s specific context and order. Finally, we demonstrate that these neural patterns partially overlapped during language production and comprehension and that listener-speaker transitions were associated with specific, time-aligned changes in neural activity. Collectively, our findings reveal a dynamical organization of neural activities that subserve language production and comprehension during natural conversation and highlight the utility of deep learning models for understanding the neural mechanisms underlying human language.
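As a rough illustration of the general approach described in the abstract (relating representations from a pre-trained language model to intracranial neural activity), and not the authors' actual pipeline, the minimal sketch below extracts contextual word embeddings from a pre-trained GPT-2 model via the Hugging Face transformers library and fits a ridge encoding model from scikit-learn to per-word neural features. The example sentence and the random "neural" matrix are placeholders, and the model choice, pooling scheme, and regression settings are assumptions for illustration only.

import numpy as np
import torch
from transformers import GPT2Tokenizer, GPT2Model
from sklearn.linear_model import RidgeCV

# Pre-trained language model (GPT-2 chosen for illustration; the paper's model may differ).
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

def word_embeddings(words):
    """Return one contextual embedding per word, mean-pooled over its sub-word tokens."""
    enc = tokenizer(" ".join(words), return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0].numpy()   # (n_tokens, hidden_dim)
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
    # GPT-2 marks word-initial sub-word tokens with "Ġ"; the very first token lacks it.
    word_ids, w = [], -1
    for i, tok in enumerate(tokens):
        if i == 0 or tok.startswith("Ġ"):
            w += 1
        word_ids.append(w)
    word_ids = np.array(word_ids)
    return np.stack([hidden[word_ids == k].mean(axis=0) for k in range(len(words))])

# Placeholder data: in a real analysis, Y would hold per-word neural features
# (e.g., band-limited power per electrode) aligned to word onsets.
rng = np.random.default_rng(0)
words = "we recorded neural activity while two people held a conversation".split()
X = word_embeddings(words)                        # (n_words, hidden_dim)
Y = rng.standard_normal((len(words), 32))         # (n_words, n_electrodes), random placeholder

# Linear encoding model: predict neural features from contextual embeddings.
encoder = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X, Y)
print("in-sample R^2:", encoder.score(X, Y))

The reverse (decoding) direction, predicting embeddings from neural activity, and proper cross-validated scoring would follow the same pattern with the roles of X and Y swapped and a held-out split added.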
Suggested Citation
Jing Cai & Alex E. Hadjinicolaou & Angelique C. Paulk & Daniel J. Soper & Tian Xia & Alexander F. Wang & John D. Rolston & R. Mark Richardson & Ziv M. Williams & Sydney S. Cash, 2025.
"Natural language processing models reveal neural dynamics of human conversation,"
Nature Communications, Nature, vol. 16(1), pages 1-13, December.
Handle: RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-58620-w
DOI: 10.1038/s41467-025-58620-w
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-58620-w. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.nature.com.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.