This talk presents an approach to cognitive modeling that uses neuroimaging data (fMRI) to adjudicate between more and less human-like models. Using naturalistic narratives as spoken stimuli, we can apply familiar tools such as parsing, language modeling, and multi-word expressions to the task of understanding human language comprehension in real time. Via data sharing, anyone with an idea about how comprehension might work can try their hand at modeling the brain.
John Hale will be joining the University of Georgia's Linguistics Department in Fall 2018. Before that he was a full-time research scientist at DeepMind. His work centers on language comprehension, answering questions like: how are we able to understand one another, just by hearing a sequence of words, one by one? He has sought to answer this question through cognitive modeling, analyzing the human mind via computer simulation. John holds a PhD in Cognitive Science from Johns Hopkins University (2003) and is the author of the book Automaton Theories of Human Sentence Comprehension (CSLI Publications, 2014). His recent paper Finding Syntax in Human Encephalography with Beam Search (with Chris Dyer, Adhiguna Kuncoro, and Jonathan R. Brennan) won the Best Paper Award at the 56th Annual Meeting of the Association for Computational Linguistics.