The Neural Mechanisms of Speech Production: From computational modeling to neural prosthesis
Frank Guenther (Boston University)
Speech production is a highly complex sensorimotor task involving tightly coordinated processing in the frontal, temporal, and parietal lobes of the cerebral cortex. To better understand these processes, our laboratory has designed, experimentally tested, and iteratively refined a neural network model whose components correspond to the brain regions involved in speech. Babbling and imitation phases are used to train neural mappings between phonological, articulatory, auditory, and somatosensory representations. After learning, the model can produce the syllables and words it has learned by generating movements of an articulatory synthesizer. Because the model's components correspond to neural populations and are assigned precise anatomical locations, activity in the model's cells can be compared directly to neuroimaging data. Computer simulations of the model account for a wide range of experimental findings, including data on the acquisition of speaking skills, articulatory kinematics, and brain activity during speech. Furthermore, "damaged" versions of the model are being used to investigate several communication disorders, including stuttering, apraxia of speech, and spasmodic dysphonia. Finally, the model was used to guide development of a neural prosthesis aimed at restoring speech output to a profoundly paralyzed individual with an electrode permanently implanted in his speech motor cortex. In a vowel production task, the volunteer maintained a 70% hit rate after 5-10 practice attempts for each vowel, supporting the feasibility of brain-machine interfaces with the potential to restore conversational speech abilities to the profoundly paralyzed.
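The babbling-and-imitation scheme described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration: a toy linear "synthesizer" stands in for the model's articulatory synthesizer, and an ordinary least-squares fit stands in for the learned neural mapping from auditory targets to articulator movements; the actual model is far richer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy "synthesizer": a fixed linear map from 3 articulator
# positions to 2 auditory (formant-like) dimensions. This stands in for
# the model's full articulatory synthesizer and is illustrative only.
A = np.array([[1.0, 0.5, 0.0],
              [0.2, 1.0, 0.8]])

def synthesize(articulation):
    """Map an articulator configuration to its auditory consequence."""
    return A @ articulation

# Babbling phase: random articulations paired with their auditory outcomes.
artic = rng.uniform(-1.0, 1.0, size=(500, 3))
audio = artic @ A.T

# Learn an inverse (auditory -> articulatory) mapping by least squares,
# standing in for the trained neural mapping between representations.
W, *_ = np.linalg.lstsq(audio, artic, rcond=None)

# Imitation phase: given an auditory target, produce an articulation
# whose synthesized output reproduces that target.
target = np.array([0.6, -0.3])
articulation = W.T @ target
reproduced = synthesize(articulation)
print(np.allclose(reproduced, target))  # the learned inverse hits the target
```

Because the toy synthesizer maps three articulators onto two auditory dimensions, many articulations produce the same sound; the least-squares fit simply picks one consistent solution, loosely analogous to the motor equivalence the real model exhibits.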