Keynotes

Dr. Phil Cohen, Professor and Director, Laboratory for Dialogue Research, Faculty of Information Technology, Monash University, Australia

Towards Collaborative Dialogue

Abstract: This talk will discuss a program of research for building collaborative dialogue systems, which are a core part of virtual assistants. I will briefly discuss the strengths and limitations of current approaches to dialogue, including neural network-based and slot-filling approaches, but then concentrate on approaches that treat conversation as planned collaborative behaviour. Collaborative interaction involves recognizing someone's goals, intentions, and plans, and then performing actions to facilitate them. People learn this basic capability at a very young age and are expected to be helpful as part of ordinary social interaction. In general, people's plans involve both speech acts (such as requests, questions, confirmations, etc.) and physical acts. When collaborative behavior is applied to speech acts, people infer the reasons behind their interlocutor's utterances and attempt to ensure their success. Such reasoning is apparent when an information agent answers the question "Do you know where the Sydney flight leaves?" with "Yes, Gate 8, and it's running 20 minutes late." It is also apparent when one asks "Where is the nearest gas station?" and the interlocutor answers "2 kilometers to your right," even though that station isn't literally the closest, but rather the closest one that is open. In this latter case, the respondent has inferred that you want to buy gas, not just to know the location of the station. In both cases, the literal and truthful answer would not be cooperative. In order to build systems that collaborate with humans or other artificial agents, a system needs components for planning, plan recognition, and reasoning about agents' mental states (beliefs, desires, goals, intentions, obligations, etc.). In this talk, I will discuss the current theory and practice of such collaborative belief-desire-intention architectures and demonstrate how they can form the basis for an advanced collaborative dialogue manager. In such an approach, systems reason about what they plan to say and about why the user said what s/he did. Because there is a plan standing behind the system's utterances, the system is able to explain its reasoning. Finally, we will discuss potential methods for combining such a plan-based approach with machine-learned approaches.
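
To make the gas-station example concrete, here is a minimal, purely illustrative Python sketch of plan-based cooperative answering. The goal labels, data, and helper functions are invented for this note and are not part of the speaker's systems; the point is only that the agent answers the inferred goal ("buy gas") rather than the literal question.

```python
# Toy illustration (hypothetical): infer the goal behind a question and answer
# in a way that helps that goal succeed, not just literally.
from dataclasses import dataclass

@dataclass
class Station:
    name: str
    distance_km: float
    is_open: bool

# Hypothetical domain knowledge available to the agent.
STATIONS = [
    Station("Shell on Main St", 1.0, False),  # literally nearest, but closed
    Station("BP on Hill Rd", 2.0, True),      # nearest station that is open
]

def recognise_goal(utterance: str) -> str:
    """Very crude stand-in for plan recognition: map the surface question
    to the goal the speaker is presumed to have."""
    if "gas station" in utterance.lower():
        return "buy_gas"          # the asker wants fuel, not trivia
    return "know_location"

def cooperative_answer(utterance: str) -> str:
    goal = recognise_goal(utterance)
    if goal == "buy_gas":
        # Facilitate the inferred plan: only an open station lets it succeed.
        usable = [s for s in STATIONS if s.is_open]
        best = min(usable, key=lambda s: s.distance_km)
        return f"{best.name}, about {best.distance_km} km away (it's open)."
    # Literal reading: just report the nearest station.
    best = min(STATIONS, key=lambda s: s.distance_km)
    return f"{best.name}, about {best.distance_km} km away."

print(cooperative_answer("Where is the nearest gas station?"))
```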

Speaker bio: Dr. Phil Cohen has long been engaged in the AI subfields of human-computer dialogue, multimodal interaction, and multiagent systems. He is a Fellow of the Association for the Advancement of Artificial Intelligence and a past President of the Association for Computational Linguistics. Currently, he directs the Laboratory for Dialogue Research at Monash University. Formerly Chief Scientist, AI and Sr. Vice President for Advanced Technology at Voicebox Technologies, he has also held positions at Adapx Inc (founder), the Oregon Graduate Institute (Professor), the Artificial Intelligence Center of SRI International (Sr. Research Scientist and Program Director, Natural Language Program), Fairchild Laboratory for Artificial Intelligence, and Bolt Beranek and Newman. His accomplishments include co-developing influential theories of intention, collaboration, and speech acts, co-developing and deploying high-performance multimodal systems to the US Government, and conceiving and leading the project at SRI International that developed the Open Agent Architecture, which eventually became Siri. Cohen has published more than 150 refereed papers, with more than 16,900 citations, and received 7 patents. His paper with Prof. Hector Levesque, "Intention is Choice with Commitment," was awarded the inaugural Influential Paper Award from the International Foundation for Autonomous Agents and Multi-Agent Systems. Most recently, he received the 2017 Sustained Accomplishment Award from the International Conference on Multimodal Interaction. At Voicebox, Cohen led a team engaged in semantic parsing and human-computer dialogue.

Prof. Amit Sheth, AAAI and IEEE Fellow, Knoesis, Wright State University, USA

Towards smart chatbots for enhanced health: using multisensory sensing and semantic-cognitive-perceptual computing for monitoring, appraisal, and adherence to intervention

Abstract: Understanding and managing health is complex. Throughout the last few decades of modern medicine, we have relied on clinicians for most health-related decision making. New technologies have enabled a growing involvement of patients in their own health management, aided by an increasing variety and amount of patient-generated health data. The augmented personalized health strategy [http://bit.ly/k-APH, http://bit.ly/APH-HI] outlines a broad variety of patient and clinician engagement in devising increasingly sophisticated and powerful health management solutions - from self-monitoring, self-appraisal, self-management, and intervention to the prediction of disease progression and planning. Chatbots could play a pivotal role throughout this unfolding data-driven, AI-supported ecosystem [http://bit.ly/H-Chatbot], which engages patients and clinicians in collecting data, driving their actions, informing them of their choices, and even delivering part of the clinical care (e.g., Cognitive Behavioral Therapy (CBT) for mental health patients). However, this will require quite a few advances to make the technology more intelligent. In this talk, we will share some experience and observations based on our ongoing collaborative projects, which typically involve clinicians and patients and target pediatric asthma management, pre- and post-bariatric surgery care regimens, depression and other mental health issues, and nutrition. Using use cases and prototypes, we will elucidate the need for, support for, and use of domain- and user-specific knowledge graphs, Natural Language Processing (NLP), machine learning, and conversational AI for:

  • multimodal interactions including text, voice, and other media, along with the use of diverse devices and software platforms for “natural” communication
  • context enabled by deep relevant medical/healthcare knowledge including clinical protocols
  • personalization by collecting and using the history of the individual patient from IoT health devices, open data, and Electronic Medical Record (EMR)
  • abstraction by aggregating and correlating diverse data streams to draw plausible explanation(s) based on public (cohort-level) data (for example, the percentage of asthmatic patients who develop symptoms when exposed to certain triggers) and personal data
  • smart dialogue (intent) management and response generation using causal relations and inference of associations (a simplified sketch of these ideas follows this list)
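
To ground the "personalization" and "abstraction" bullets above, the following is a simplified, hypothetical Python sketch that combines a patient's own sensor readings with cohort-level knowledge to produce an explainable appraisal. All names, thresholds, and statistics are invented for illustration and do not come from any real deployment or dataset.

```python
# Hypothetical sketch: merge personal IoT data with cohort-level knowledge
# (a stand-in for a domain knowledge graph) to generate an explainable message.
patient_history = {
    "name": "patient_042",
    "known_triggers": ["pollen"],
    "today": {"pollen_ppm": 310, "peak_flow": 240},  # from IoT health devices
}

# Tiny stand-in for cohort-level statistics drawn from public data.
cohort_knowledge = {
    "pollen": {"symptom_rate": 0.62, "threshold_ppm": 250},
}

def appraise(history: dict, knowledge: dict) -> str:
    trigger = history["known_triggers"][0]
    facts = knowledge[trigger]
    exposure = history["today"]["pollen_ppm"]
    if exposure > facts["threshold_ppm"]:
        # Abstraction: correlate personal exposure with cohort statistics.
        return (f"Pollen is high today ({exposure} ppm). About "
                f"{int(facts['symptom_rate'] * 100)}% of asthma patients report "
                f"symptoms at this level, and it is one of your known triggers, "
                f"so consider limiting outdoor time and keeping your inhaler handy.")
    return "No elevated triggers detected today."

print(appraise(patient_history, cohort_knowledge))
```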

Speaker bio: Amit Sheth is an educator, researcher, and entrepreneur. He is the LexisNexis Ohio Eminent Scholar, an IEEE Fellow, an AAAI Fellow, an AAAS Fellow, and the executive director of Kno.e.sis, the Ohio Center of Excellence in Knowledge-enabled Computing, a multidisciplinary center engaged in BioHealth innovation. Amit is working towards a vision of Computing for Human Experience enabled by capabilities at the intersection of AI (semantic, cognitive, and perceptual computing), Big and Smart Data (exploiting multimodal Physical-Cyber-Social data), and Augmented Personalized Health. http://knoesis.org/amit

Jim Dewan, Solution Architect, IBM Support Transformation Team, USA

Using Conversation Agents for Customer Support at Scale - the IBM Case Study

Abstract: IBM offers thousands of products and services to its customers and is always seeking new ways to improve customer service. Over the past four years, it has explored and deployed conversation agent technology for customer support and today has over sixty customer-facing automated agents deployed for over two hundred major products, making it the leading user of such agents in the industry. The use of conversation agents has improved customer satisfaction while reducing the investment required for customer support. This talk will demonstrate the underlying technology and discuss some of the lessons learnt. Specifically, we show how IBM Support was able to rapidly construct chatbots using Watson Assistant, Watson Discovery, and a crowdsourced knowledge map that encourages asset reuse without relying solely on manually curated questions and responses. The auto-generated chat interaction provides a multi-turn experience that converses with a client to identify their intent and search category and quickly locate an asset that resolves an issue or answers a question.
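
As a rough illustration of the multi-turn intent-and-category flow described above, here is a hypothetical Python sketch. It does not call the Watson Assistant or Watson Discovery APIs, and the intents, product names, and knowledge-map entries are invented; it only shows the general pattern of asking follow-up questions until an intent and a category can be mapped to an asset.

```python
# Hypothetical multi-turn routing sketch: collect intent and product over
# several turns, then look up an asset in a (toy) knowledge map.
from typing import Optional

KNOWLEDGE_MAP = {
    ("install", "db2"): "doc://db2/install-guide",
    ("error",   "db2"): "doc://db2/troubleshooting",
}

def classify_intent(text: str) -> Optional[str]:
    text = text.lower()
    if "install" in text:
        return "install"
    if "error" in text or "fail" in text:
        return "error"
    return None

def chatbot_session(turns: list) -> str:
    intent, product = None, None
    for turn in turns:
        intent = intent or classify_intent(turn)
        product = product or ("db2" if "db2" in turn.lower() else None)
        if intent is None:
            print("Bot: Are you trying to install something, or fix an error?")
        elif product is None:
            print("Bot: Which product is this about?")
        else:
            # Both slots filled: route to the asset that should resolve the issue.
            return f"Bot: This should help: {KNOWLEDGE_MAP[(intent, product)]}"
    return "Bot: Let me route you to a support agent."

print(chatbot_session(["My setup keeps failing", "It's Db2"]))
```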

Speaker bio: Jim Dewan is a solution architect on the IBM Support Transformation Team. He has over 29 years of experience spanning development, support, project leadership, architecture, test, and customer liaison roles. Over the last two years, he has designed, developed, and deployed a Watson solution on the new IBM Salesforce support site. As project leader of this strategic solution, he has managed and designed requirements, led the agile development process, and worked closely with IBM teams who leverage the solution to best meet their customer self-service needs. Jim is focused on developing a model for bi-directional knowledge sharing between support agents and AI to enhance both self-service and traditional support methods.