29 July 2023 (Saturday)
Neural Conversational AI Workshop
What’s left to TEACH (Trustworthy, Enhanced, Adaptable, Capable and Human-centric) chatbots?
ICML 2023, Hybrid in Hawaii + Zoom
Arvind's talk is uploaded!
Arvind was unable to give his talk as he was not feeling well. With his permission, we are instead sharing his earlier talk at Oxford ML, which he says largely overlaps with the talk he had planned to give.
Join us for the Post Workshop Celebration 🎉🎉🎉
Hard Rock Cafe, 280 Beach Walk, Honolulu, HI 96815
Summary
The recent breathtaking progress in generative natural language processing (NLP) has been propelled by large language models and innovative learning methods at the intersection of machine learning (ML) and NLP, such as Reinforcement Learning from Human Feedback (RLHF), leading to impressive chatbots like ChatGPT and Bard. However, their lack of groundedness, factuality, and interoperability with tools and APIs limits them mostly to creative endeavors, owing to low fidelity and reliability. In contrast, real-world digital assistants such as Siri, Alexa, and Google Assistant can interface with proprietary APIs, but they still cover a relatively narrow set of use cases, mostly simple single-turn interactions. By combining their respective strengths, the goal of deploying truly conversational and capable digital assistants that are also trustworthy seems tantalizingly close. What are the remaining challenges toward this goal, and how can the ML and NLP communities come together to overcome them? To answer this question, we identify the following areas of interest:
Model Enhancement Techniques: What are new methods for enhancing dialogue models in the context of large language models and instruction fine-tuned models? What are effective approaches to fulfill their corresponding data requirements?
Adaptation: How can we keep models up-to-date with evolving world knowledge and calibrate them to use tools and APIs? How can we continually learn from interactions? How can we adapt dialogue models to align with personalized user preferences or specialized purposes?
Trustworthiness and Evaluation: How can we efficiently proxy a dialogue model’s real-world performance beyond expensive human evaluation? How should we define robustness, fairness, safety, and fidelity for dialogue models? How can we enhance and systematically evaluate their performance along these axes? How should the model behave when prompted with sensitive topics or toxic behavior? How can we detect and avoid private information leaking from interactions with models?
Expanding Human-centric Capabilities: How can we enhance dialogue models to provide more natural conversational experiences that can accommodate multimodal signals, such as image, video, and audio? How can they support multi-party dynamic interactions instead of only 1:1 turn-by-turn interactions? How can we extend to multilingual dialogue models?
The goal of this workshop is to bring together machine learning researchers and dialogue researchers from academia and industry, encouraging knowledge transfer and collaboration on these topics to surface ideas that can expand the use cases of conversational AI. The ideal outcome of the workshop is a set of concrete research directions that enable the next generation of digital assistants.
Important Dates
Abstract Submissions Due: 05/24/2023 (AOE)
Paper Submissions Due: 05/26/2023 (AOE) (extended from 05/24/2023)
Notification of Acceptance: 06/19/2023
Camera-ready Paper Due: 07/07/2023
Workshop Date: 07/29/2023 (Saturday)
Invited Speakers
Emily Dinan
DeepMind
Pascale Fung
Hong Kong University of Science & Technology
Arvind R Neelakantan
OpenAI
João Sedoc
New York University
Marilyn Walker
University of California, Santa Cruz
Jason Weston
Meta AI
Zhou Yu
Columbia University
Contact
Please contact the organizing committee at teach2023@googlegroups.com if you have any questions.