Perspective | Neuroscience

Splitting speech and music


Science, 28 Feb 2020: Vol. 367, Issue 6481, pp. 974-976
DOI: 10.1126/science.aba7913


Summary

Speech and music are human universals, and people around the world often blend them together into vocal songs (1). This entwinement of the speech and music cognitive domains is a challenge for the auditory cognitive system. How do listeners extract words and melodies from a single sound wave? The split is surmised to start in the signal: Speech and musical sounds are thought to differ in details of their acoustic structure and thus activate different receptive preferences of the left and right auditory cortices of the brain (2, 3). On page 1043 of this issue, Albouy et al. (4) provide evidence for the biophysical basis of the long-debated, yet still unresolved, hemispheric asymmetry of speech and music perception in humans. They show that the left and right auditory regions of the brain contribute differently to the decoding of words and melodies in songs.
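The acoustic contrast at stake can be made concrete. In the framing of refs. (2, 3), speech carries its information mainly in fast temporal modulations of the amplitude envelope, whereas melodies depend on fine spectral modulations; a standard way to quantify both at once is the modulation spectrum, the two-dimensional Fourier transform of a spectrogram. The Python sketch below illustrates that general computation on a synthetic signal. It is an illustration of the technique only, not the stimulus pipeline of Albouy et al., and its parameter choices (16 kHz sampling, 512-sample analysis windows) are arbitrary assumptions.

import numpy as np
from scipy.signal import stft

# Synthesize a 2-second test signal at 16 kHz: an amplitude-modulated tone,
# standing in for the single sound wave a listener would receive.
fs = 16000
t = np.arange(0, 2.0, 1 / fs)
signal = (1 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 440 * t)

# Short-time Fourier transform: a time-frequency representation (spectrogram).
freqs, times, Z = stft(signal, fs=fs, nperseg=512)
spec = np.abs(Z)

# The 2D Fourier transform of the spectrogram is the modulation spectrum:
# one axis indexes temporal modulation rates (Hz), the other spectral
# modulation scales (cycles/Hz).
mod_spectrum = np.abs(np.fft.fftshift(np.fft.fft2(spec)))
temporal_rates = np.fft.fftshift(np.fft.fftfreq(len(times), d=times[1] - times[0]))
spectral_scales = np.fft.fftshift(np.fft.fftfreq(len(freqs), d=freqs[1] - freqs[0]))

# In the spectrotemporal trade-off framing of refs. (2, 3), energy at high
# temporal rates is characteristic of speech-like envelopes, and energy at
# fine spectral scales of melodic pitch structure.
print(f"temporal modulation axis: {temporal_rates.min():.1f} to {temporal_rates.max():.1f} Hz")
print(f"spectral modulation axis: {spectral_scales.min():.4f} to {spectral_scales.max():.4f} cycles/Hz")

In this representation, filtering a signal along one modulation axis while preserving the other, as Albouy et al. did with sung stimuli, selectively removes speech-like or music-like information from the same sound wave.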
