Keynote Speakers

Seamless Natural Communication between Humans and Machines

Session Chair: Alexander Rudnicky (CMU, USA)

Abstract


Dialog systems such as Alexa and Siri are everywhere in our lives. They can complete tasks such as booking flights, making restaurant reservations, and training people for interviews. However, currently deployed dialog systems are rule-based and cannot generalize to different domains, let alone track dialog context flexibly.

We will first discuss how to design studies that collect realistic dialogs through a crowdsourcing platform. Then we will introduce a dialog model that leverages multi-task learning and semantic scaffolds to achieve good performance from limited data. We will further improve the model's coherence by using finite-state transducers to track both semantic actions and conversational strategies from the dialog history.

Finally, we will analyze some ethical concerns and human factors in deploying dialog systems. All of this work comes together to build seamless, natural communication between humans and machines.
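The finite-state tracking mentioned in the abstract can be pictured with a small sketch. The Python snippet below is illustrative only and is not the speaker's implementation: it hand-writes a toy transducer whose states, dialog acts, and strategy names are all invented for the example, and it folds a sequence of dialog acts into a conversational state while emitting one suggested system strategy per turn.

    # Illustrative only: a toy finite-state transducer over dialog acts.
    # All states, acts, and strategies are invented for this sketch.

    # (current state, observed user act) -> (next state, suggested system strategy)
    TRANSITIONS = {
        ("start", "greeting"): ("greeted", "greet_back"),
        ("greeted", "request_info"): ("eliciting", "ask_clarifying_question"),
        ("eliciting", "provide_info"): ("grounded", "acknowledge_and_confirm"),
        ("grounded", "request_info"): ("eliciting", "ask_clarifying_question"),
        ("grounded", "closing"): ("done", "wrap_up"),
    }

    def track(dialog_acts, state="start"):
        """Run the transducer over a dialog-act sequence, emitting one strategy per turn."""
        strategies = []
        for act in dialog_acts:
            state, strategy = TRANSITIONS.get((state, act), (state, "fallback"))
            strategies.append(strategy)
        return state, strategies

    final_state, strategies = track(["greeting", "request_info", "provide_info", "closing"])
    print(final_state)   # done
    print(strategies)    # ['greet_back', 'ask_clarifying_question', 'acknowledge_and_confirm', 'wrap_up']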


Short Bio

Zhou Yu is an Assistant Professor in the Computer Science Department at UC Davis and will join the Computer Science Department at Columbia University as an Assistant Professor in January 2021. She obtained her Ph.D. from Carnegie Mellon University in 2017. Zhou has built various dialog systems with real-world impact, such as a job interview training system, a depression screening system, and a second language learning system. Her research interests include dialog systems, language understanding and generation, vision and language, human-computer interaction, and social robots. Zhou received an ACL 2019 best paper nomination, was featured in Forbes' 2018 30 Under 30 in Science, and won the 2018 Amazon Alexa Prize.


Slides (Click Here)

FINDING NEMD

Session Chair: Kristiina Jokinen (AIST, Japan)

Abstract


The recent proliferation of conversational AI creatures is still navigating superficially in shallow waters with regard to language understanding and generation. Accordingly, these new types of creatures fail to dive properly into the deep oceans of human-like use of language and intelligence. FINDING NEMD (New Evaluation Metrics for Dialogue) is an epic journey across the seas of data and data-driven applications to tame its conversational AI creatures for the benefit of science and humankind.


Short Bio

Rafael E. Banchs is a Senior Research Scientist at Intapp Inc. His research focuses on applying NLP technologies to problems in the professional services industry. He is also an Adjunct Associate Professor at Nanyang Technological University (NTU) in Singapore, where he supervises student projects on question answering and conversational-agent applications. He has previous experience organizing workshops at ACL and other international conferences, including the workshop series on Named Entities (NEWS), Conversational Agents (WOCHAT) and Machine Translation (HyTra).


Slides (Click Here)

Better dialogue generation!

Session Chair: Haizhou Li (National University of Singapore, Singapore)

Abstract


Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, and displaying knowledge, empathy and personality appropriately, while maintaining a consistent persona. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
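The models mentioned in the abstract were released publicly (via the speaker's group's ParlAI framework). As a minimal usage sketch, added here for this page rather than taken from the talk, the snippet below queries a distilled Blender checkpoint through its Hugging Face transformers mirror (the facebook/blenderbot-400M-distill name is an assumption of this sketch); the minimum-length beam-search constraint loosely reflects the "choice of generation strategy" point in the abstract.

    # Minimal sketch, assuming the facebook/blenderbot-400M-distill mirror on
    # Hugging Face; not the official ParlAI workflow described in the talk.
    from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration

    name = "facebook/blenderbot-400M-distill"
    tokenizer = BlenderbotTokenizer.from_pretrained(name)
    model = BlenderbotForConditionalGeneration.from_pretrained(name)

    # Encode one user turn and decode a reply; beam search with a minimum length
    # is one example of a generation-strategy choice.
    inputs = tokenizer(["Hello! Do you have any hobbies?"], return_tensors="pt")
    reply_ids = model.generate(**inputs, num_beams=10, min_length=20, max_length=60)
    print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])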

Short Bio

Jason Weston is a research scientist at Facebook, NY and a Visiting Research Professor at NYU. He earned his PhD in machine learning at Royal Holloway, University of London and at AT&T Research in Red Bank, NJ (advisors: Alex Gammerman, Volodya Vovk and Vladimir Vapnik) in 2000. Since then, he has worked at Biowulf Technologies, the Max Planck Institute for Biological Cybernetics, and Google Research. Jason has published over 100 papers, including best-paper award winners at ICML and ECML, and received a Test of Time Award for his work "A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning", ICML 2008 (with Ronan Collobert). He was part of the YouTube team that won a National Academy of Television Arts & Sciences Emmy Award for Technology and Engineering for Personalized Recommendation Engines for Video Discovery. He was listed by AMiner as the 16th most influential machine learning scholar and as one of the top 50 authors in Computer Science in Science.


Slides (Click Here)