Keynote Speakers

  • Yejin Choi, University of Washington (joint invited speaker with SemEval 2017)

BIO: Yejin Choi is an assistant professor in the Computer Science & Engineering Department at the University of Washington. Her recent research focuses on integrating language and vision, learning knowledge about the world from text and images, modeling richer context for natural language generation, and modeling the nonliteral meaning of text using connotation frames. She was among IEEE's AI Top 10 to Watch in 2015 and a co-recipient of the Marr Prize at ICCV 2013. Her work on detecting deceptive reviews, predicting literary success, and learning to interpret connotation has been featured by numerous media outlets, including NBC News for New York, NPR Radio, the New York Times, and Bloomberg Businessweek. She received her Ph.D. in Computer Science from Cornell University.

Title: From Naive Physics to Connotation: Modeling Commonsense in Frame Semantics

Abstract: Intelligent communication requires reading between the lines, which in turn requires rich background knowledge about how the world works. However, learning unspoken commonsense knowledge from language is nontrivial, as people rarely state the obvious, e.g., "my house is bigger than me." In this talk, I will discuss how we can recover this trivial everyday knowledge from language alone, without an embodied agent. A key insight is this: the implicit knowledge people share and assume systematically influences the way people use language, which provides indirect clues for reasoning about the world. For example, if "Jen entered her house", it must be that her house is bigger than she is. I will discuss how we can model a variety of aspects of knowledge, ranging from naive physics to connotation, by adapting representations of frame semantics.
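The "Jen entered her house" example in the abstract can be illustrated with a toy sketch: certain verb frames systematically imply physical relations between their arguments, so observing the frame licenses a commonsense inference. The frame table, relation names, and function below are all invented for illustration and are not the speaker's actual model:

```python
# Toy frame-based commonsense inference (illustrative only):
# each verb frame is mapped to a physical relation it implies
# between its agent and its other argument.
FRAME_IMPLICATIONS = {
    "entered": "smaller_than",  # the agent fits inside the location
    "swallowed": "larger_than",  # the agent can contain the object
}

def infer(agent, verb, argument):
    """Return an implied (agent, relation, argument) triple, or None."""
    relation = FRAME_IMPLICATIONS.get(verb)
    if relation is None:
        return None
    return (agent, relation, argument)

print(infer("Jen", "entered", "her house"))
# ('Jen', 'smaller_than', 'her house')
```

A real system would need to learn such implications from corpus statistics rather than a hand-written table, which is exactly the challenge the talk addresses.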

  • Katrin Erk, University of Texas at Austin

BIO: Katrin Erk is an associate professor in the Department of Linguistics at the University of Texas at Austin. Her research expertise is in the area of computational linguistics, especially semantics. Her work is on distributed, flexible approaches to describing word meaning, and on combining them with logic-based representations of sentences and other larger structures. At the word level, she is studying flexible representations of meaning in context, independent of word sense lists. At the sentence level, she is looking into probabilistic frameworks that can draw weighted inferences from combined logical and distributed representations. Katrin Erk completed her dissertation on tree description languages and ellipsis at Saarland University in 2002, under the supervision of Gert Smolka and Manfred Pinkal. From 2002 to 2006, she held a researcher position in the Salsa project at Saarland University, working on manual and automatic frame-semantic analysis.

Title: What do you know about an alligator when you know the company it keeps?

Abstract: How can people learn about the meaning of a word from textual context? If we assume that lexical knowledge has to do with truth conditions, then what can textual (distributional) information contribute? 
I will argue that at the least, an agent can observe how textual contexts co-occur with concepts that have particular properties, and that the agent can use this information to make inferences about unknown words: "I don't know what an alligator is, but it must be something like a crocodile". I will further argue that this inference can only be noisy and partial, and is best described in probabilistic terms.
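The inference in the abstract ("I don't know what an alligator is, but it must be something like a crocodile") can be sketched with a minimal distributional-similarity example: a word's nearest neighbor in co-occurrence space suggests what kind of thing it is. The vectors, context words, and counts below are all invented toy data, not a real corpus:

```python
import math

# Toy co-occurrence vectors over the context words
# ["swim", "teeth", "river", "leash"]; all counts are invented.
vectors = {
    "alligator": [4.0, 5.0, 3.0, 0.0],
    "crocodile": [5.0, 4.0, 4.0, 0.0],
    "poodle":    [0.0, 1.0, 0.0, 5.0],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def nearest_neighbor(word):
    """The word whose contexts most resemble those of `word`."""
    others = [w for w in vectors if w != word]
    return max(others, key=lambda w: cosine(vectors[word], vectors[w]))

print(nearest_neighbor("alligator"))  # crocodile
```

As the abstract argues, this kind of inference is noisy and partial: similarity of contexts only makes some properties of the unknown word more probable, which is why a probabilistic treatment is the natural fit.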