Neural Representations of Language Meaning
Tom M. Mitchell, Ph.D.
E. Fredkin University Professor
Machine Learning Department
Carnegie Mellon University
How does the human brain use neural activity to create and represent the meanings of words, sentences, and stories? One way to study this question is to have people read text while scanning their brains, then develop machine learning methods to discover the mapping between language features and the observed neural activity. We have been doing such experiments with fMRI (1 mm spatial resolution) and MEG (1 msec time resolution) brain imaging for over a decade. As a result, we have learned answers to questions such as “Are the neural encodings of word meaning the same in your brain and mine?”, “Are neural encodings of word meaning built out of recognizable subcomponents, or are they randomly different for each word?”, and “What sequence of neurally encoded information flows through the brain during the half-second in which the brain comprehends a single word, or when it comprehends a multi-word sentence, or a story?” This talk will summarize some of what we have learned, describe newer questions we are currently working on, and highlight the central role that machine learning algorithms play in this research.
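The mapping described above can be sketched in code. The following is a minimal illustration, not the actual method or data from this research: it assumes each word is represented by a small semantic feature vector and evokes an activation pattern over voxels, fits a linear feature-to-voxel map by ridge regression on synthetic data, and evaluates it with a leave-two-out matching test in the spirit of such decoding studies.

```python
import numpy as np

# Synthetic stand-ins for real data (all sizes are illustrative):
# X: one semantic feature vector per word (e.g., co-occurrence statistics)
# Y: one simulated fMRI voxel-activation pattern per word
rng = np.random.default_rng(0)
n_words, n_features, n_voxels = 60, 25, 500

X = rng.normal(size=(n_words, n_features))        # word feature vectors
W_true = rng.normal(size=(n_features, n_voxels))  # hidden feature-to-voxel map
Y = X @ W_true + 0.1 * rng.normal(size=(n_words, n_voxels))

# Hold out two words; fit the map on the rest with closed-form ridge regression.
test, train = slice(0, 2), slice(2, None)
lam = 1.0
A = X[train].T @ X[train] + lam * np.eye(n_features)
W_hat = np.linalg.solve(A, X[train].T @ Y[train])

# Leave-two-out check: does the model match each held-out word to its own
# brain image better than to the other held-out word's image?
pred = X[test] @ W_hat

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

correct = corr(pred[0], Y[0]) + corr(pred[1], Y[1])
swapped = corr(pred[0], Y[1]) + corr(pred[1], Y[0])
print(correct > swapped)  # True when the learned map generalizes
```

With real data the feature vectors, voxel counts, and regularization are of course far more carefully chosen; the point is only that a learned linear map can predict neural activity for words never seen during training.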