
Joao Correia, PhD. Investigating the neural code of speech production using high-resolution fMRI

3/11/2016
- BCBL Auditorium
What: Investigating the neural code of speech production using high-resolution fMRI

Where: BCBL Auditorium

Who: Joao Correia, PhD, Postdoctoral Researcher, Maastricht University, the Netherlands.

When: 12 noon

Effective verbal communication requires a neural system able to translate the intention to speak into a motor program that orchestrates multiple articulatory gestures. In addition to articulation, normal speaking requires online monitoring of speech performance, for example via internal and external auditory and proprioceptive feedback (Hickok, Houde, & Rong, 2011). This sensorimotor coupling is present in fluent speech and includes a ‘parity’ between articulatory and sensory brain systems (Cogan et al., 2014; Hickok & Poeppel, 2007; Pugh et al., 2001). Unravelling the neural representations of speech production using non-invasive fMRI recordings is challenging but crucial for understanding the neural processes underlying human communication. This knowledge is critical for developing more realistic neurobiological and computational models of speech and language, and subsequently for enabling more informed investigations of developmental and acquired communication impairments.

Recent advances in functional magnetic resonance imaging (fMRI) permit investigating the human speaking brain with unprecedented detail and specificity (Correia, Jansma, & Bonte, 2015; Evans & Davis, 2015). Specifically, a novel prospective motion correction (PMC) technology (Maclaren, Herbst, Speck, & Zaitsev, 2013), which adjusts the imaging pulse sequences using real-time tracking of head movements, makes it possible to study speech production in greater detail (e.g., with a 1 mm isotropic functional voxel size). Imaging fMRI activations during overt speaking without such a correction leads to sub-optimal signal quality due to contamination by motion artefacts. In combination with multivariate classification strategies (e.g., Kriegeskorte, Mur, & Bandettini, 2008; Santoro et al., 2014) that exploit the distributed nature of brain responses and neural representations, these advances provide a unique opportunity to investigate the neural code of speech production in ecologically valid settings.

Here, we investigate how speech is produced: how intentions to produce individual items (i.e., words) are translated into specific articulatory programs, and how the neural and sensory consequences of speaking are represented. Using multivariate decoding and encoding methods able to unravel fMRI response patterns specific to individual speech items, we examine the function of different brain representations during word production: representations related to speech planning and articulatory execution, as well as representations related to the neural and sensory consequences of speaking, namely somatosensory and auditory representations prior to and after overt production (Dick, Bernal, & Tremblay, 2014; Hickok, 2012).
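
As a concrete illustration of the multivariate classification strategy mentioned above, the sketch below decodes word identity from trial-wise fMRI voxel patterns with cross-validation, using simulated data and scikit-learn. It is a minimal, hypothetical example of the general technique (all dimensions and signal strengths are invented), not the analysis pipeline used in this work.

```python
# Minimal sketch of cross-validated multivariate decoding of word identity
# from fMRI voxel patterns. Illustrative only: it assumes trial-wise response
# patterns (e.g., GLM beta estimates per trial) have already been extracted.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)

n_trials, n_voxels, n_words = 120, 500, 4                 # hypothetical sizes
y = np.repeat(np.arange(n_words), n_trials // n_words)    # word label per trial
X = rng.standard_normal((n_trials, n_voxels))             # trial x voxel patterns
X[np.arange(n_trials), y] += 1.0                          # weak word-specific signal

# Linear classifier on z-scored voxel features, evaluated with stratified
# cross-validation; above-chance accuracy indicates that the distributed
# voxel patterns carry information about the produced word.
clf = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(clf, X, y, cv=StratifiedKFold(n_splits=5))
print(f"Decoding accuracy: {scores.mean():.2f} (chance = {1 / n_words:.2f})")
```

In practice, such decoding analyses are typically run within anatomically or functionally defined regions of interest, or as a searchlight over the whole brain, to localize where word-specific information is represented during planning, articulation, and post-production monitoring.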