ACM Multimedia 2020 Workshop

16th October, 2pm CET

Multimodal Conversational AI


ACM MuCAI will be an online workshop at ACM Multimedia on the 16th of October at 2pm CET.

Please see the details on how to attend the workshop on the ACM Multimedia website.

We are excited about the two excellent keynotes, which promise to look at multimodal dialog systems from very different standpoints.


Augmenting Machine Intelligence
with Multimodal Information

Prof. Zhou Yu (University of California, Davis, USA)

Response Generation and Retrieval in
Multimodal Conversational AI
Prof. Verena Rieser (Heriot-Watt University, UK)


We look forward to these valuable and insightful talks by two leading experts.

We invite attendees to participate in the Vision and Challenges in Multimodal Conversational AI panel discussion. The panel will address technological, ethical, legal, and social aspects of conversational search. Both the audience and the panelists are invited to engage in a brainstorming session on all these aspects.

Panelists

Prof. Alex Rudnicky (Carnegie Mellon University, USA)

Prof. Zhou Yu (University of California, Davis, USA)

Prof. Verena Rieser (Heriot-Watt University, UK)

Prof. Xavier Alameda-Pineda (INRIA, France)

The articles accepted at the workshop address several relevant challenges in multimodal conversational AI: How to include assistive technologies in dialog systems? How can agents negotiate in dialogs? And how to handle emotion in multimodal dialog?

Together with the keynotes and the panel, the papers show how the community is exploring a very diverse set of research challenges.


Have a look at the Workshop Schedule!

Scope

Topics

Conversational systems have recently seen a significant rise in demand, driven by commercial systems such as Amazon's Alexa, Apple's Siri, Microsoft's Cortana, and the Google Assistant. Research on multimodal chatbots, where users and the conversational agent communicate through both natural language and visual data, remains largely underexplored.

Conversational agents are becoming a commodity as a growing number of companies push for this technology. Their wide use exposes the many challenges in building more natural, human-like, and engaging agents. The research community is actively addressing several of these challenges: How are visual and textual data related in user utterances? How to interpret the user's intent? How to encode the multimodal dialog state? What are the ethical and legal aspects of conversational AI?

The Multimodal Conversational AI workshop will be a forum where researchers and practitioners share their experiences and brainstorm about successes and failures in the field. It will also promote collaboration to strengthen the conversational AI community at ACM Multimedia. Topics of interest include:

  • Design and evaluation of conversational agents

  • User-agent experience design

  • Preference elicitation in conversational agents

  • Recommendations in conversational systems

  • User-agent legal and ethical issues in conversational systems

  • Multimodal user intent understanding

  • Visual conversations/dialogs

  • Opinion recommendation in conversational agents

  • Deep learning for multimodal conversational agents

  • Conversation state tracking models and online learning

  • Supply/demand in conversational agents for e-commerce

  • Reinforcement learning in conversational agents

  • Resources and datasets

  • Conversational systems applications, including but not limited to e-commerce, social good, music, Web search, and healthcare