What: Phonetic plasticity at multiple timescales
Where: Zoom room 2
Who: Emily Myers, PhD. Professor, Department of Speech, Language, and Hearing Sciences and Department of Psychological Sciences; Co-director, Cognitive Neuroscience of Communication-Connecticut Training Program; Research Scientist, Haskins Laboratories; University of Connecticut, US.
When: Thursday, March 10th at 1:30 PM.
Learning to perceive the sounds of a new language in adulthood is hard, leading to the proposal that there is a critical or sensitive period for speech sound learning. Yet listening to the speech of an accented or unfamiliar talker also requires perceptual flexibility, and that seems relatively easy. In this talk, I describe how the brain constantly learns and adapts to phonetic variability in spoken language comprehension, a process we refer to as “phonetic plasticity.” The overarching goal of this work is to understand the mechanisms that underlie phonetic plasticity in native and non-native language processing, looking for commonalities among and differences between these systems. Evidence from our lab suggests that training on non-native sounds produces plastic effects in the brain regions involved in native-language perception, and that consolidation during sleep plays a large role in the degree to which training is maintained and generalizes to new talkers. Further, similar mechanisms may be at play when listeners learn to perceive non-standard tokens in accented speech in their native language, and in perceptual adaptation to non-standard tokens more broadly. Taken together, these findings suggest that speech perception is more plastic than critical period accounts would predict, and that individual variability in brain function, brain structure, and sleep behavior may account for differences not only in L2 learning, but also in the way listeners adapt to the speech signal in their native language.