NLP for Conversational AI
ACL 2020 Workshop in Seattle, USA
For as long as artificial intelligence has been a goal of humankind, mathematicians, linguists, and computer scientists have dedicated their careers to empowering human-machine communication in natural language. While the recent emergence of virtual personal assistants such as Siri, Alexa, Google Assistant, and Cortana has greatly pushed the field forward, the development of these conversational agents remains difficult, and numerous questions remain unanswered.
Following the huge success of the 1st NLP for Conversational AI workshop at ACL 2019 with 400+ attendees, the 2nd NLP for Conversational AI workshop will be a one-day event consisting of keynotes, posters, and panel sessions. The goal of this workshop is to bring together NLP researchers and practitioners in different fields alongside experts in speech and machine learning, to discuss the state-of-the-art in conversational artificial intelligence, to share insights and challenges, to bridge the gap between academic research and real-world product deployment, and to shed light on where the field is going.
Sound exciting? We are looking forward to seeing you in Seattle!
- Paper Submission: April 17, 2020 (23:59 PST)
- Notification of Acceptance: May 29, 2020
- Camera-ready Paper Due: June 12, 2020
- Workshop Date: July 9, 2020
Keynote: Robustness and Scalability for Conversational AI
Although conversational systems have attracted a lot of attention recently, current systems sometimes fail due to errors from their different components. This talk presents two potential directions for improvement: 1) learning language embeddings tailored to practical scenarios for better robustness, and 2) a novel learning framework that exploits the duality between natural language understanding and generation for better scalability. Both directions enhance the robustness and scalability of conversational systems and show promise for guiding future research.
Keynote: Language Grounding with Robots
We use language to refer to objects like “toast”, “plate”, and “table” and to communicate requests such as “Could you make breakfast?” In this talk, I will present work on computational methods to tie language to physical, grounded meaning. Robots are an ideal platform for such work because they can perceive and interact with the world. I will discuss dialog and learning strategies I have developed to enable robots to learn from their human partners, similar to how people learn from one another through interaction. I will present methods enabling robots to understand language referring expressions like “the heavy, metallic mug”, the first work showing that it is possible to learn to connect words to their perceptual properties in the visual, tactile, and auditory senses of a physical robot. I will also present a benchmark of human-human dialogs for cooperative navigation, as well as models for both navigation and end-to-end, agent-agent dialog execution trained on that benchmark.