NLP for Conversational AI

Co-located with ACL 2023 @ Toronto, Canada

Over the past decades, mathematicians, linguists, and computer scientists have worked to enable human-machine communication in natural language. While the emergence of virtual personal assistants such as Siri, Alexa, Google Assistant, Cortana, and ChatGPT has pushed the field forward in recent years, numerous challenges remain.

Following the success of the 4th NLP for Conversational AI workshop at ACL, "The 5th NLP4ConvAI" will be a one-day workshop, co-located with ACL 2023 in Toronto, Canada. The goal of this workshop is to bring together researchers and practitioners to discuss impactful research problems in this area, share findings from real-world applications, and generate ideas for future research directions.

The workshop will include keynotes, posters, panel sessions, and a shared task. In keynote talks, senior technical leaders from industry and academia will share insights on the latest developments in the field. We encourage researchers and students to share their perspectives and latest findings. There will also be a panel discussion with noted conversational AI leaders focused on the state of the field, future directions, and open problems across academia and industry.

Find conversational AI exciting? We look forward to seeing you!


Previous Workshops

Important Dates

Invited Speakers

Larry Heck, Georgia Institute of Technology

Keynote: Build it for One @ Right Place Right Time: Leveraging Context in Conversational Systems

Recent years have seen significant advances in conversational systems, particularly with the advent of attention-based language models pre-trained on large datasets of unlabeled natural language text. While the breadth of these models has led to fluid and coherent dialogues over a broad range of topics, they can make mistakes when high precision is required. High precision is required not only when specialized skills are involved (legal/medical/tax advice, computations, etc.), but also to avoid seemingly trivial mistakes involving commonsense reasoning and other relevant 'in-the-moment' context. Much of this context centers on, and should be derived from, the user's perspective. This talk will explore prior and current work on leveraging this user-centric context (build it for one) and the user's specific situation (right place right time) to improve the accuracy and utility of conversational systems.

Diyi Yang, Stanford University

Keynote: Inclusive Conversational AI for Positive Impact

Conversational AI has revolutionized the way we interact with technology, holding the potential to create positive impact across a variety of domains. In this talk, we present two studies that develop inclusive conversational AI techniques to empower users in different contexts for social impact. The first looks at linguistic prejudice, using a participatory design approach to develop dialect-inclusive language tools for low-resource dialects in conversational question answering, together with efficient adaptation of models trained on Standard American English (SAE) to different dialects. The second introduces CARE, an interactive conversational agent that supports peer counselors by generating personalized suggestions. CARE diagnoses suitable counseling strategies and provides tailored example responses during training, empowering counselors to respond effectively. These works showcase the potential of inclusive language technologies to address language and communication barriers and foster positive impact.

Vipul Raheja, Grammarly

Keynote: Building Better Writing Assistants In the Era of Conversational LLMs

Text revision is a complex, iterative process. It is no surprise that human writers struggle to simultaneously satisfy the many demands and constraints of text revision when producing well-written texts: they must cover the content, follow linguistic norms, set the right tone, observe discourse conventions, and more. This presents a massive challenge and opportunity for intelligent writing assistants, whose abilities have shifted enormously in recent years with the advent of large language models. Beyond the quality of editing suggestions, writing assistance has also undergone a monumental shift from a one-sided, push-based paradigm to a natural language-based, conversational exchange of input and feedback. However, writing assistants still fall short in quality, personalization, and overall usability, limiting the value they provide to users. In this talk, I will present my research, challenges, and insights on building intelligent and interactive writing assistants for effective communication, navigating challenges pertaining to quality, personalization, and usability.

Nurul Lubis, Heinrich Heine University Düsseldorf

Keynote: Dialogue Evaluation via Offline Reinforcement Learning and Emotion Prediction

Task-oriented dialogue systems aim to fulfill user goals, such as booking hotels or searching for restaurants, through natural language interactions. They are ideally evaluated through interaction with human users. However, this is impractical at every iteration of the development phase due to time and financial constraints. Researchers therefore resort to static evaluation on dialogue corpora. Although static evaluations are more practical and easily reproducible, they do not fully reflect the real performance of dialogue systems. Can we devise an evaluation that keeps the best of both worlds? In this talk, I explore the use of offline reinforcement learning and emotion prediction for dialogue evaluation that is practical, reliable, and strongly correlated with human judgements.

Jason Weston, Meta AI

Keynote: Improving Open Language Models by Learning from Organic Interactions

We discuss techniques that can be used to learn how to improve AIs (dialogue models) by interacting with organic users "in the wild". Training models with organic data is challenging because interactions with people in the wild include both high-quality conversations and feedback, as well as adversarial and toxic behavior. We thus study techniques that enable learning from helpful teachers while avoiding learning from people who are trying to trick the model into unhelpful or toxic responses. We present BlenderBot 3x, an update to the conversational model BlenderBot 3, trained on 6M such interactions from participating users of the system. BlenderBot 3x is both preferred in conversation to BlenderBot 3 and shown to produce safer responses in challenging situations. We then discuss how we believe continued use of these techniques, and improved variants, can lead to further gains.