I Hear What You're Saying


Science  12 Jun 2009:
Vol. 324, Issue 5933, pp. 1367
DOI: 10.1126/science.324_1367c

The widespread adoption of multiple technologies for distinct channels of data communication—text, voice, and video—has made it abundantly clear to even the casual user that more bandwidth allows for higher rates of information transfer. But what happens on the receiving end? Presumably, recipients of phone calls are processing a lot more information, such as emotional overtones, than just the words that are spoken. Does this emotional content register in their brains? Ethofer et al. apply the method of multivariate pattern analysis and show that pseudowords spoken with five distinct emotional melodies (anger, sadness, relief, joy, or neutrality) do evoke recognizable neural responses within the auditory cortex. Each of these emotions could be discriminated from the others, and decoding algorithms trained on any nine of the speakers' voices accurately classified the emotional identity of the tenth speaker's speech. Furthermore, the five distributed maps of neuronal activity segregated more closely by level of arousal than by valence, suggesting a possible affective organization within the auditory cortex.

Curr. Biol. 19, 10.1016/j.cub.2009.04.054 (2009).
