Report

Phonetic Feature Encoding in Human Superior Temporal Gyrus

Science, 28 Feb 2014: Vol. 343, Issue 6174, pp. 1006-1010
DOI: 10.1126/science.1245994

Deciphering Language

Consonants and vowels represent basic building blocks of human language. How their characteristics are extracted from acoustic speech input is not well understood. Directly recording from the superior temporal gyrus of patients as part of their clinical evaluation for epilepsy surgery, Mesgarani et al. (p. 1006, published online 30 January; see the Perspective by Grodzinsky and Nelken) investigated neural responses while the subjects listened to continuous speech. The findings reveal how both vowels and consonants of different phonetic categories are encoded.

Abstract

During speech perception, linguistic elements such as consonants and vowels are extracted from a complex acoustic speech signal. The superior temporal gyrus (STG) participates in high-order auditory processing of speech, but how it encodes phonetic information is poorly understood. We used high-density direct cortical surface recordings in humans while they listened to natural, continuous speech to reveal the STG representation of the entire English phonetic inventory. At single electrodes, we found response selectivity to distinct phonetic features. Encoding of acoustic properties was mediated by a distributed population response. Phonetic features could be directly related to tuning for spectrotemporal acoustic cues, some of which were encoded in a nonlinear fashion or by integration of multiple cues. These findings demonstrate the acoustic-phonetic representation of speech in human STG.
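To illustrate the abstract's statement that phonetic selectivity at single electrodes can be related to tuning for spectrotemporal acoustic cues, the sketch below shows one common way such tuning is characterized: a linear spectrotemporal receptive field (STRF) fit by ridge regression from a speech spectrogram to an electrode's response. This is a minimal, hypothetical example on synthetic data; the variable names, lag window, and regularization are assumptions for illustration, not the authors' actual analysis pipeline.

# Illustrative sketch: estimating a spectrotemporal receptive field (STRF)
# for a single electrode via ridge regression. Synthetic data stand in for
# the auditory spectrogram and the electrode's high-gamma response.
import numpy as np
from numpy.linalg import solve

rng = np.random.default_rng(0)

n_freq, n_time, n_lags = 32, 5000, 20   # frequency bins, time samples, temporal lags

# Synthetic "auditory spectrogram" of continuous speech (freq x time).
spec = rng.standard_normal((n_freq, n_time))

# Ground-truth filter used only to generate a fake electrode response.
true_strf = rng.standard_normal((n_freq, n_lags)) * np.hanning(n_lags)

# Lagged design matrix: each row is the spectrogram patch (n_freq * n_lags
# values) immediately preceding one time sample.
X = np.zeros((n_time - n_lags, n_freq * n_lags))
for t in range(n_lags, n_time):
    X[t - n_lags] = spec[:, t - n_lags:t].ravel()

# Simulated electrode response = linear filter output + noise.
y = X @ true_strf.ravel() + rng.standard_normal(n_time - n_lags)

# Ridge regression: w = (X'X + lambda*I)^-1 X'y
lam = 1e2
w = solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
strf = w.reshape(n_freq, n_lags)

# Sanity check: correlation between recovered and generating filter.
r = np.corrcoef(strf.ravel(), true_strf.ravel())[0, 1]
print(f"recovered STRF correlates with ground truth: r = {r:.2f}")

A purely linear fit of this kind would, of course, miss the nonlinear encoding and cue integration the abstract reports; it is shown only to make concrete what "tuning for spectrotemporal acoustic cues" refers to.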

  • Corresponding author. E-mail: changed@neurosurg.ucsf.edu
