Florencia Assaneo

Post-Doctoral Fellow

Speech production, like other motor systems, activates a previously learned motor command in order to produce a desired output. But the goal of this output, and the way it is evaluated, present some particularities. Think, for example, of a visuomotor task in which you are trying to reach an object with your hand. There, your actions are driven by an external target: the object's position. When you speak, you have no such external target; instead, your target is an internal representation of the desired speech sound. Several studies point in the direction that, while speaking, a feedforward motor command is executed and its output is compared with an internal representation of the auditory and somatosensory sequence corresponding to the intended speech sound. This feedback loop allows online correction of mismatches between the intended and the ongoing speech. My research asks which features define the internal representation of speech sounds, and how that representation is shaped by the feedback loop. My experimental approach applies different manipulations to the auditory feedback and explores how the output is modified. I complement these psychophysical experiments with fMRI or MEG in order to better characterize the neural network responsible for speech production.
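The feedforward-plus-feedback account described above can be illustrated with a minimal simulation. This is my own hedged sketch, not a published model: a single formant-like value stands in for the internal auditory target, a constant shift to the perceived output mimics an altered-feedback experiment, and a gain parameter controls how much of the perceived error is corrected on each production.

```python
# Minimal illustration (an assumption-laden sketch, not an established model)
# of the feedforward/feedback loop: execute a feedforward command, compare the
# auditory outcome with the internal target, and correct part of the mismatch.

def speak(target, perturbation=0.0, feedback_gain=0.3, steps=10):
    """Simulate repeated productions of one speech sound.

    target        -- internal representation of the intended sound
                     (here a single formant-like value, arbitrary units)
    perturbation  -- constant shift applied to the auditory feedback,
                     mimicking an altered-feedback experiment
    feedback_gain -- fraction of the perceived error corrected per production
    """
    command = target                        # feedforward command starts at target
    outputs = []
    for _ in range(steps):
        produced = command                  # motor output of this production
        heard = produced + perturbation     # (possibly altered) auditory feedback
        error = heard - target              # mismatch relative to internal target
        command -= feedback_gain * error    # partial online compensation
        outputs.append(produced)
    return outputs

# With upward-shifted feedback, successive productions drift downward,
# opposing the perturbation; with veridical feedback, output stays on target.
shifted = speak(target=500.0, perturbation=50.0)
veridical = speak(target=500.0, perturbation=0.0)
```

The opposing drift in `shifted` is the qualitative signature reported in altered-feedback studies; the gain value here is arbitrary and only sets how quickly compensation accumulates.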




  1. Tian, X. & Poeppel, D. (2014) Dynamics of self-monitoring and error detection in speech production: evidence from mental imagery and MEG. Journal of Cognitive Neuroscience, in press.
  2. Niziolek, C. A. & Guenther, F. H. (2013) Vowel category boundaries enhance cortical and behavioral responses to speech feedback alterations. The Journal of Neuroscience, 33(29), 12090–12098.
  3. Hickok, G., Houde, J. & Rong, F. (2011) Sensorimotor integration in speech processing: computational basis and neural organization. Neuron, 69(3), 407–422.