Invited Speakers
Diyi Yang, Georgia Institute of Technology
Title: Seven Social Factors in Natural Language Processing: Theory and Practice
Abstract: Recently, natural language processing (NLP) has seen increasing success and produced extensive industrial applications. Yet despite being sufficient to enable these applications, current NLP systems often ignore the social part of language, e.g., who says it, in what context, and for what goals. In this talk, we take a closer look at social factors in language via a new theoretical taxonomy, and at their interplay with computational methods via two lines of work. The first studies what makes language persuasive, introducing a semi-supervised method that leverages hierarchical structure in text to recognize persuasion strategies in good-faith requests. The second demonstrates how various structures in conversations can be utilized to generate better summaries of everyday interaction. We conclude by discussing several open questions about how to build socially aware language technologies, with the hope of getting closer to the goal of human-like language understanding.
Bio: Diyi Yang is an assistant professor in the School of Interactive Computing at Georgia Tech. She is broadly interested in Computational Social Science and Natural Language Processing. Diyi received her PhD from the Language Technologies Institute at Carnegie Mellon University. Her work has been published at leading NLP/HCI conferences and has resulted in multiple award nominations from EMNLP, ICWSM, SIGCHI, and CSCW. She has been named a Forbes 30 Under 30 in Science and a recipient of the IEEE AI's 10 to Watch award, and has received faculty research awards from Amazon, Facebook, JPMorgan Chase, and Salesforce.
Felix Hill, DeepMind
Title: Why Do Embodied Language Learning?
Abstract: In this talk, I’ll give some good reasons to study language learning and processing in the context of an embodied or situated agent. Learning in an embodied context is fundamentally different from other ML settings. Working out how to perceive and move, in addition to understanding and using language, can be a substantial additional burden for the learner. However, I will show that it can also bring important benefits. The embodied learner sees the world from an egocentric perspective, is necessarily located at a specific place at a given time, exerts some control over the learning data it encounters, and confronts head-on the relationship between language and the physical world. These factors place strong constraints on the learner’s experience, which can in turn lead to more human-like learning outcomes. Our findings suggest that embodied learning may play an important role in convincingly replicating human linguistic intuitions and behaviours in a machine.
Bio: Felix Hill is a Research Scientist at DeepMind, where he leads a team focusing on grounded language learning and processing. He has a Master's degree in pure mathematics from the University of Oxford, and a Master's in Psycholinguistics and a PhD in Computer Science from the University of Cambridge. His graduate studies focused on representation learning in neural network models of language, on which he worked with many great collaborators including Ivan Vulić, Douwe Kiela, Yoshua Bengio, Kyunghyun Cho, and Jason Weston. At DeepMind, he has focused on developing better learning, meta-learning, reasoning, memory systems, and generalization in agents that explore and interact with simulated environments.