NLP for Conversational AI
Co-located with ACL 2024 @ Thailand
Over the past decades, mathematicians, linguists, and computer scientists have dedicated their efforts towards empowering human-machine communication in natural language. While the emergence of virtual personal assistants such as Siri, Alexa, Google Assistant, Cortana, and ChatGPT has pushed the field forward in recent years, numerous challenges remain.
Following the success of the 5th NLP for Conversational AI workshop at ACL 2023, the 6th NLP4ConvAI will be a one-day workshop, co-located with ACL 2024 in Thailand. The goal of this workshop is to bring together researchers and practitioners to discuss impactful research problems in this area, share findings from real-world applications, and generate ideas for future research directions.
The workshop will include keynotes, posters, panel sessions, and a shared task. In keynote talks, senior technical leaders from industry and academia will share insights on the latest developments in the field. We encourage researchers and students to share their perspectives and latest findings. There will also be a panel discussion with noted conversational AI leaders focused on the state of the field, future directions, and open problems across academia and industry.
Find conversational AI exciting? We are looking forward to seeing you!
News
Website launched.
Submission deadline has been updated.
Invited speakers confirmed.
Program confirmed.
Previous Workshops
5th NLP4ConvAI @ACL 2023
4th NLP4ConvAI @ACL 2022
3rd NLP4ConvAI @ EMNLP 2021
2nd NLP4ConvAI @ ACL 2020
1st NLP4ConvAI @ ACL 2019
Important Dates
Paper Submission Deadline: June 2nd, 2024 (Anywhere on Earth) (previously: May 17th, 2024)
Notification of Acceptance: June 20th, 2024 (previously: June 17th, 2024)
Camera-ready Paper Due: July 1st, 2024
Workshop Date: August 16th, 2024
Invited Speakers
Keynote: Knowledge Base Question Answering: Case Studies in Transfer Learning and Unanswerability
Most existing KBQA models are studied only in heavily supervised in-domain settings where all questions are answerable. This severely limits the real-world applicability of these systems. In this talk we study both of these problems: (1) transfer learning, where only a small amount of in-domain training data is available along with a larger out-of-domain training set, and (2) robust KBQA, where questions may not be answerable given the KB. In addition to providing new datasets, we build a series of models using both small and large language models for both tasks. Finally, we devise GPT-4-based workflows for datasets that include unanswerable questions, which can be developed with very little in-domain training data.
Keynote: Challenges and Opportunities in Developing Conversational AI for Fintech
Building conversational AI assistants for finance with Large Language Models (LLMs) brings both opportunities and challenges. This talk delves into adapting general-purpose LLMs to the unique needs of financial chatbots, focusing on making results reliable and increasing chatbot utility. We'll explore strategies for detecting and mitigating hallucinations, measuring and communicating uncertainty, and striking the balance between helpfulness and restraint. Additionally, we'll examine how AI can offer personalized financial advice by leveraging user profiles, aligning with industry best practices, and managing uncertainty in complex interactions. We'll also discuss data challenges and suggest avenues for future research in this exciting field.
Keynote: Factual and Faithful Generation with Knowledge Retrieval
Despite demonstrating remarkable flexibility and capability, large language models (LLMs) frequently produce responses that are inconsistent or factually inaccurate. Recently, retrieval-augmented generation (RAG) has emerged as a popular mitigation strategy by equipping LLMs with relevant knowledge, but it still falls short of providing a comprehensive solution. In this presentation, I will discuss our recent endeavors to improve the factual accuracy and faithfulness of LLMs. This includes an overview of how we can automatically assess the factuality of LLM responses and encourage LLMs to rely more heavily on explicit knowledge when generating responses. Additionally, I will introduce a novel factuality-aware alignment method, comprising factuality-aware SFT and factuality-aware RL through direct preference optimization. Our approach guides LLMs to produce more factual responses while maintaining their ability to follow instructions. Finally, I will conclude by discussing key open problems and outlining directions for addressing them in the near future.
Keynote: Mixture-of-Agents Enhances Large Language Model Capabilities
Recent advances in large language models (LLMs) demonstrate substantial capabilities in natural language understanding and generation tasks. With the growing number of LLMs, how to harness the collective expertise of multiple LLMs is an exciting open direction. Toward this goal, we propose a new approach that leverages the collective strengths of multiple LLMs through a Mixture-of-Agents (MoA) methodology. In our approach, we construct a layered MoA architecture wherein each layer comprises multiple LLM agents. Each agent takes all the outputs from agents in the previous layer as auxiliary information in generating its response. MoA models achieve state-of-the-art performance on AlpacaEval 2.0, MT-Bench, and FLASK, surpassing GPT-4 Omni. For example, our MoA using only open-source LLMs leads AlpacaEval 2.0 by a substantial gap, achieving a score of 65.1% compared to 57.5% for GPT-4 Omni.
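To make the layered setup concrete, below is a minimal sketch of a Mixture-of-Agents pipeline as described in the abstract: each layer's agents see the previous layer's responses as auxiliary context, and a final aggregator synthesizes the last layer's outputs. The query_llm helper, the prompt wording, and the model lists are assumptions for illustration, not the speaker's implementation.

```python
from typing import Callable, List

def mixture_of_agents(
    prompt: str,
    layers: List[List[str]],                # each inner list names the models in one layer
    aggregator: str,                        # model that produces the final answer
    query_llm: Callable[[str, str], str],   # (model_name, prompt) -> response text
) -> str:
    """Sketch of a layered MoA pipeline; query_llm is a user-supplied stand-in."""
    previous_outputs: List[str] = []
    for layer in layers:
        current_outputs = []
        for model in layer:
            # Each agent receives the original request plus all responses
            # from the previous layer as auxiliary information.
            aux = "\n\n".join(
                f"Reference response {i + 1}:\n{text}"
                for i, text in enumerate(previous_outputs)
            )
            agent_prompt = f"{aux}\n\nUser request:\n{prompt}" if aux else prompt
            current_outputs.append(query_llm(model, agent_prompt))
        previous_outputs = current_outputs

    # A final aggregator model synthesizes the last layer's outputs.
    synthesis_prompt = (
        "Synthesize the reference responses below into a single high-quality answer.\n\n"
        + "\n\n".join(
            f"Reference response {i + 1}:\n{text}"
            for i, text in enumerate(previous_outputs)
        )
        + f"\n\nUser request:\n{prompt}"
    )
    return query_llm(aggregator, synthesis_prompt)

# Example wiring (hypothetical model names):
# answer = mixture_of_agents(
#     "Explain retrieval-augmented generation.",
#     layers=[["open-model-a", "open-model-b"], ["open-model-c"]],
#     aggregator="open-model-c",
#     query_llm=my_query_llm,
# )
```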