The tutorial "Towards Next-Generation Intelligent Assistants for AR/VR Devices" will be hosted at the WebConf 2023, which will take place in Austin on April 30, 2023.
The emergence of AR/VR devices has raised many new challenges for building intelligent assistants. Their unique requirements have inspired new research directions such as (a) understanding users' situated multi-modal contexts (e.g., vision, sensor signals) as well as language-oriented conversational contexts, (b) grounding interactions on growing external and internal knowledge graphs, and (c) developing inference models under on-device constraints and with privacy-preserving methods.
In this tutorial, we will provide an in-depth walk-through of recent techniques in the aforementioned areas. We aim to introduce these techniques to researchers and practitioners who are building intelligent assistants, and to inspire research that brings us one step closer to realizing the dream of a next-generation intelligent assistant.
The tutorial slides are available here.
Presenters:
Luna Dong (Meta Reality Labs)
Zhou Yu (Columbia University)
Shane Moon (Meta Reality Labs)
Ethan Xu (Meta Reality Labs)
Kshitiz Malik (Meta Reality Labs)
Location: AT&T Hotel and Conference Center - Classroom #104
All times are in CDT (Austin time).
9:00 - 9:20 AM: Introduction (Luna)
9:20 - 9:45 AM: Conversational AI (Zhou)
9:45 - 10:30 AM: Multimodal ConvAI (Shane)
10:30 - 10:50 AM: (Break)
10:50 - 11:35 AM: Personalized & Knowledge-enhanced (Ethan)
11:35 AM - 12:05 PM: On-device & Federated Learning (Kshitiz)
12:05 - 12:20 PM: Conclusions (Luna)