
Spectro-temporal acoustical markers differentiate speech from song across cultures

Author

Listed:
  • Philippe Albouy

    (Laval University
    International Laboratory for Brain, Music and Sound Research (BRAMS)
    Centre for Research on Brain, Language and Music and Centre for Interdisciplinary Research in Music, Media, and Technology)

  • Samuel A. Mehr

    (International Laboratory for Brain, Music and Sound Research (BRAMS)
    University of Auckland
    Yale University)

  • Roxane S. Hoyer

    (Laval University)

  • Jérémie Ginzburg

    (Laval University
    CNRS, UMR5292, INSERM, U1028 - Université Claude Bernard Lyon 1
    Montreal Neurological Institute, McGill University)

  • Yi Du

    (Chinese Academy of Sciences)

  • Robert J. Zatorre

    (International Laboratory for Brain, Music and Sound Research (BRAMS)
    Centre for Research on Brain, Language and Music and Centre for Interdisciplinary Research in Music, Media, and Technology
    Montreal Neurological Institute, McGill University)

Abstract

Humans produce two forms of cognitively complex vocalizations: speech and song. It is debated whether these differ based primarily on culturally specific, learned features, or if acoustical features can reliably distinguish them. We study the spectro-temporal modulation patterns of vocalizations produced by 369 people living in 21 urban, rural, and small-scale societies across six continents. Specific ranges of spectral and temporal modulations, overlapping within categories and across societies, significantly differentiate speech from song. Machine-learning classification shows that this effect is cross-culturally robust, vocalizations being reliably classified solely from their spectro-temporal features across all 21 societies. Listeners unfamiliar with the cultures classify these vocalizations using similar spectro-temporal cues as the machine learning algorithm. Finally, spectro-temporal features are better able to discriminate song from speech than a broad range of other acoustical variables, suggesting that spectro-temporal modulation—a key feature of auditory neuronal tuning—accounts for a fundamental difference between these categories.
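
The analysis summarized above can be pictured with a short sketch. The following Python code is not the authors' implementation: it uses only NumPy, SciPy, and scikit-learn, and every parameter value (window sizes, modulation cutoffs, bin counts) as well as the `recordings` and `labels` variables are illustrative assumptions. It estimates a spectro-temporal modulation power spectrum from a recording and feeds fixed-size modulation features to a linear classifier separating speech from song.

# Minimal sketch (not the authors' pipeline): estimate a spectro-temporal
# modulation power spectrum (MPS) from a waveform and use it as input to a
# linear classifier distinguishing "speech" from "song".
# All parameter values below are illustrative assumptions, not values from the paper.
import numpy as np
from scipy.signal import stft
from sklearn.linear_model import LogisticRegression

def modulation_power_spectrum(waveform, sr, nperseg=512, noverlap=384):
    """2-D FFT of the log-magnitude spectrogram.

    Returns the MPS magnitude plus its spectral-modulation axis (cycles/kHz)
    and temporal-modulation axis (Hz).
    """
    freqs, times, Z = stft(waveform, fs=sr, nperseg=nperseg, noverlap=noverlap)
    log_spec = np.log(np.abs(Z) + 1e-10)
    mps = np.abs(np.fft.fftshift(np.fft.fft2(log_spec)))
    # Modulation axes: rows index spectral modulation, columns temporal modulation.
    spec_step_khz = (freqs[1] - freqs[0]) / 1000.0  # kHz between frequency bins
    time_step = times[1] - times[0]                  # seconds between frames
    spec_mod = np.fft.fftshift(np.fft.fftfreq(log_spec.shape[0], d=spec_step_khz))
    temp_mod = np.fft.fftshift(np.fft.fftfreq(log_spec.shape[1], d=time_step))
    return mps, spec_mod, temp_mod

def mps_features(waveform, sr, temp_max=20.0, spec_max=10.0, n_bins=16):
    """Average the MPS inside a fixed low-modulation window and flatten it,
    so recordings of different lengths map to equal-length feature vectors."""
    mps, spec_mod, temp_mod = modulation_power_spectrum(waveform, sr)
    rows = np.abs(spec_mod) <= spec_max  # cycles/kHz
    cols = np.abs(temp_mod) <= temp_max  # Hz
    patch = mps[np.ix_(rows, cols)]
    # Downsample the selected patch to a fixed n_bins x n_bins grid by block averaging.
    r_idx = np.array_split(np.arange(patch.shape[0]), n_bins)
    c_idx = np.array_split(np.arange(patch.shape[1]), n_bins)
    grid = np.array([[patch[np.ix_(r, c)].mean() for c in c_idx] for r in r_idx])
    return grid.ravel()

# Hypothetical usage with a list of (waveform, sample_rate) pairs and 0/1 labels
# (0 = speech, 1 = song):
# X = np.stack([mps_features(w, sr) for w, sr in recordings])
# clf = LogisticRegression(max_iter=1000).fit(X, labels)

Restricting the feature window to low spectral and temporal modulation rates reflects the general idea, discussed in the paper, that particular ranges of spectro-temporal modulation differentiate the two categories; the specific cutoffs above are placeholders rather than the values the authors report.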

Suggested Citation

  • Philippe Albouy & Samuel A. Mehr & Roxane S. Hoyer & Jérémie Ginzburg & Yi Du & Robert J. Zatorre, 2024. "Spectro-temporal acoustical markers differentiate speech from song across cultures," Nature Communications, Nature, vol. 15(1), pages 1-13, December.
  • Handle: RePEc:nat:natcom:v:15:y:2024:i:1:d:10.1038_s41467-024-49040-3
    DOI: 10.1038/s41467-024-49040-3

    Download full text from publisher

    File URL: https://www.nature.com/articles/s41467-024-49040-3
    File Function: Abstract
    Download Restriction: no

    File URL: https://libkey.io/10.1038/s41467-024-49040-3?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. Courtney B. Hilton & Cody J. Moser & Mila Bertolo & Harry Lee-Rubin & Dorsa Amir & Constance M. Bainbridge & Jan Simson & Dean Knox & Luke Glowacki & Elias Alemu & Andrzej Galbarczyk & Grazyna Jasiens, 2022. "Acoustic regularities in infant-directed speech and song across cultures," Nature Human Behaviour, Nature, vol. 6(11), pages 1545-1556, November.
    2. Adeen Flinker & Werner K. Doyle & Ashesh D. Mehta & Orrin Devinsky & David Poeppel, 2019. "Spectrotemporal modulation provides a unifying framework for auditory cortical asymmetries," Nature Human Behaviour, Nature, vol. 3(4), pages 393-405, April.
    3. Evan C. Smith & Michael S. Lewicki, 2006. "Efficient auditory coding," Nature, Nature, vol. 439(7079), pages 978-982, February.
    4. Robert J Zatorre & Shari R Baum, 2012. "Musical Melody and Speech Intonation: Singing a Different Tune?," Working Papers id:5079, eSocialSciences.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Sam Passmore & Anna L. C. Wood & Chiara Barbieri & Dor Shilton & Hideo Daikoku & Quentin D. Atkinson & Patrick E. Savage, 2024. "Global musical diversity is largely independent of linguistic and genetic histories," Nature Communications, Nature, vol. 15(1), pages 1-12, December.
    2. Jonathan J Hunt & Peter Dayan & Geoffrey J Goodhill, 2013. "Sparse Coding Can Predict Primary Visual Cortex Receptive Field Changes Induced by Abnormal Visual Input," PLOS Computational Biology, Public Library of Science, vol. 9(5), pages 1-17, May.
    3. Lubomir Kostal & Petr Lansky & Jean-Pierre Rospars, 2008. "Efficient Olfactory Coding in the Pheromone Receptor Neuron of a Moth," PLOS Computational Biology, Public Library of Science, vol. 4(4), pages 1-11, April.
    4. Baishen Liang & Yanchang Li & Wanying Zhao & Yi Du, 2023. "Bilateral human laryngeal motor cortex in perceptual decision of lexical tone and voicing of consonant," Nature Communications, Nature, vol. 14(1), pages 1-15, December.
    5. Lingyun Zhao & Li Zhaoping, 2011. "Understanding Auditory Spectro-Temporal Receptive Fields and Their Changes with Input Statistics by Efficient Coding Principles," PLOS Computational Biology, Public Library of Science, vol. 7(8), pages 1-16, August.
    6. Nori Jacoby & Rainer Polak & Jessica A. Grahn & Daniel J. Cameron & Kyung Myun Lee & Ricardo Godoy & Eduardo A. Undurraga & Tomás Huanca & Timon Thalwitzer & Noumouké Doumbia & Daniel Goldberg & Eliza, 2024. "Commonality and variation in mental representations of music revealed by a cross-cultural comparison of rhythm priors in 15 countries," Nature Human Behaviour, Nature, vol. 8(5), pages 846-877, May.
    7. Oded Barzelay & Miriam Furst & Omri Barak, 2017. "A New Approach to Model Pitch Perception Using Sparse Coding," PLOS Computational Biology, Public Library of Science, vol. 13(1), pages 1-36, January.
    8. Joseph D. Zak & Gautam Reddy & Vaibhav Konanur & Venkatesh N. Murthy, 2024. "Distinct information conveyed to the olfactory bulb by feedforward input from the nose and feedback from the cortex," Nature Communications, Nature, vol. 15(1), pages 1-16, December.
    9. Noga Mosheiff & Haggai Agmon & Avraham Moriel & Yoram Burak, 2017. "An efficient coding theory for a dynamic trajectory predicts non-uniform allocation of entorhinal grid cells to modules," PLOS Computational Biology, Public Library of Science, vol. 13(6), pages 1-19, June.
    10. Sam V Norman-Haignere & Josh H McDermott, 2018. "Neural responses to natural and model-matched stimuli reveal distinct computations in primary and nonprimary auditory cortex," PLOS Biology, Public Library of Science, vol. 16(12), pages 1-46, December.
    11. David Welch & Mark Reybrouck & Piotr Podlipniak, 2022. "Meaning in Music Is Intentional, but in Soundscape It Is Not—A Naturalistic Approach to the Qualia of Sounds," IJERPH, MDPI, vol. 20(1), pages 1-18, December.
    12. Jacob N Oppenheim & Pavel Isakov & Marcelo O Magnasco, 2013. "Degraded Time-Frequency Acuity to Time-Reversed Notes," PLOS ONE, Public Library of Science, vol. 8(6), pages 1-6, June.
    13. Jonathan Schaffner & Sherry Dongqi Bao & Philippe N. Tobler & Todd A. Hare & Rafael Polania, 2023. "Sensory perception relies on fitness-maximizing codes," Nature Human Behaviour, Nature, vol. 7(7), pages 1135-1151, July.
    14. Gonzalo H Otazu & Christian Leibold, 2011. "A Corticothalamic Circuit Model for Sound Identification in Complex Scenes," PLOS ONE, Public Library of Science, vol. 6(9), pages 1-15, September.
    15. Tomas Barta & Lubomir Kostal, 2019. "The effect of inhibition on rate code efficiency indicators," PLOS Computational Biology, Public Library of Science, vol. 15(12), pages 1-21, December.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:natcom:v:15:y:2024:i:1:d:10.1038_s41467-024-49040-3. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.nature.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.