Dr. Tathagata Chakraborti, Research Staff Member, IBM Research, Cambridge MA, USA
Abstract: End-to-end learning models are not reliable enough for deployment in user-facing applications, while the state-of-the-art process for designing chatbots in enterprise settings does not scale. The result is that automated customer-support agents currently offer a poor user experience. In this talk, we will explore how automated planning techniques can provide a happy middle ground in this space. We will also discuss how the new setup provides explainability tools that keep the developer firmly in control of the behavior of the deployed bot.
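The abstract does not include implementation details, but as a rough illustration of the general idea (a sketch, not Dr. Chakraborti's actual system), a dialogue can be cast as a classical planning problem: each bot action has preconditions and effects over the conversation state, and a planner searches for a sequence of actions that reaches a goal. In the minimal sketch below, all action and predicate names are hypothetical:

```python
from collections import deque

# Hypothetical dialogue actions: each maps to (preconditions, effects)
# over a set of boolean facts describing the conversation state.
ACTIONS = {
    "greet":         (set(),            {"greeted"}),
    "ask_order_id":  ({"greeted"},      {"has_order_id"}),
    "look_up_order": ({"has_order_id"}, {"order_known"}),
    "offer_refund":  ({"order_known"},  {"refund_offered"}),
}

def plan(state, goal):
    """Breadth-first search over conversation states for a plan reaching `goal`."""
    frontier = deque([(frozenset(state), [])])
    seen = {frozenset(state)}
    while frontier:
        facts, steps = frontier.popleft()
        if goal <= facts:
            return steps
        for name, (pre, eff) in ACTIONS.items():
            if pre <= facts:
                nxt = frozenset(facts | eff)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan(set(), {"refund_offered"}))
# -> ['greet', 'ask_order_id', 'look_up_order', 'offer_refund']
```

Because the bot's behavior falls out of a declarative model rather than an opaque learned policy, a developer can inspect and constrain the plan space directly, which is the kind of developer control the abstract alludes to.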
Speaker bio: Dr. Tathagata Chakraborti is with IBM Research. His research focuses on human-AI interaction and explainable AI. He received his Ph.D. from Arizona State University in 2018 with a CIDSE Graduate Student of the Year Award and an honorable mention for the Best Dissertation Award from ICAPS, the premier conference on automated planning and scheduling. His work emphasizes taking core research in automated planning to prototypes and applications; he has won the Best System Demo Award three times at ICAPS and led a team to the US Finals of the Microsoft Imagine Cup in 2017. He was named one of IEEE Intelligent Systems' "AI's 10 to Watch" in 2020, and has received multiple awards from IBM for his leadership on IBM's cognitive and cloud platform and his contributions to the open-source software community.
Dr. John Licato, Assistant Professor, University of South Florida
Abstract:
Argumentation is a fundamental mode of communication for human beings, and one which, as yet, remains outside AI's sphere of mastery. This is not for lack of effort: GPT-3 has purportedly written an argumentative essay, and IBM's Project Debater performed well against (but did not defeat) a human expert in live debate. Many assume that as chatbots continue to advance, their ability to engage in and understand human-like argumentation is inevitable. But we must take a step back and ask ourselves: is this really what we want? In this talk, I will briefly summarize the many inefficiencies of dialogue-based argumentation and present an alternative: a controlled form of argumentative exchange that upholds principles of normative reasoning and can be used as an internal model of "good" argumentation by dialog agents.
Speaker bio: Dr. John Licato is an assistant professor in the Computer Science and Engineering department of the University of South Florida (USF) and the founder/director of the Advancing Machine and Human Reasoning (AMHR) Lab. He obtained his PhD in Computer Science from Rensselaer Polytechnic Institute (RPI) in May 2015, working under Professor Selmer Bringsjord and specializing in the computational modeling of analogical reasoning. His research interests are in human-level and logical reasoning, particularly the kinds of reasoning we normally describe as cognitive: computational modeling of cognitive reasoning, computational cognitive architectures, and analogical, deductive, and hypothetico-deductive reasoning, to name a few. He is the recipient of a 2015 AFOSR Young Investigator Program award.
Dr. Ahmed Awadallah, Principal Research Manager, Microsoft Research
Abstract:
Automatic translation of natural language into programs that interact with data and services has been the "holy grail" of human-computer interaction, information retrieval, and natural language understanding for decades. While early attempts at building natural language interfaces (NLIs) struggled to achieve the expected success, the last several years have seen a major resurgence of interest in them. The horizon of NLIs has also expanded from databases to knowledge bases, robots, the Internet of Things (via virtual assistants like Siri and Alexa), Web service APIs, general programmatic contexts, and more. In this talk, I will describe some of our recent work that aims to improve natural language interfaces along two dimensions: (a) increasing accuracy by leveraging rich contextual data, and (b) enabling users to interact with the system, refining and repairing mistakes to reach their goals.
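To make the problem concrete (a toy illustration, not the systems described in the talk), an NLI maps an utterance to an executable program, and interactive use means the user can inspect and repair the candidate program before it runs. A minimal pattern-based sketch, with the table and grammar entirely hypothetical:

```python
import re

# Hypothetical toy table the interface queries.
EMPLOYEES = [
    {"name": "Ana",  "dept": "sales", "salary": 70},
    {"name": "Bo",   "dept": "eng",   "salary": 95},
    {"name": "Cruz", "dept": "eng",   "salary": 88},
]

def parse(utterance):
    """Map an utterance to a (field, op, value) filter, or None if unparsed."""
    m = re.search(r"\bin (\w+)", utterance)
    if m:
        return ("dept", "==", m.group(1))
    m = re.search(r"paid (?:more|over) .*?(\d+)", utterance)
    if m:
        return ("salary", ">", int(m.group(1)))
    return None

def execute(program):
    field, op, value = program
    test = {"==": lambda a, b: a == b, ">": lambda a, b: a > b}[op]
    return [r["name"] for r in EMPLOYEES if test(r[field], value)]

query = parse("who works in eng")
print(query, "->", execute(query))  # ('dept', '==', 'eng') -> ['Bo', 'Cruz']

# Interactive repair: the user sees the candidate program and corrects a
# field or value before execution, instead of silently getting a wrong answer.
repaired = ("salary", ">", 90)
print(execute(repaired))            # ['Bo']
```

Exposing the intermediate program is what makes repair possible: the user corrects the system's interpretation rather than rephrasing blindly, which is the spirit of the interactive dimension the abstract describes.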
Speaker bio: Dr. Ahmed Awadallah is a Principal Research Manager at Microsoft Research, where he leads the Language & Information Technologies team. His research has sought to understand how people interact with information and to enable machines to understand and communicate in natural language (NL) and assist with task completion. His early work spanned topics including information extraction, sentiment analysis, modeling conversations and their social context, and measuring and improving search systems through user behavior modeling. More recently, his research has focused on building NLP systems with limited annotated data (transfer learning and learning from weak supervision and user interactions) and building NL interfaces to services and data. Ahmed's contributions to NLP and IR were recently recognized with the 2020 Karen Spärck Jones Award from the British Computer Society. Ahmed regularly serves as a (senior) program committee member, area chair, guest editor, and editorial board member at many major NLP and IR conferences and journals.
Dr. Kalai Ramea, Research Area Manager, Palo Alto Research Center (PARC), Xerox
Abstract:
The COVID-19 pandemic and the lockdown measures to control its spread have had a tremendous economic impact, with millions of Americans losing their jobs. As a result, we are witnessing an unprecedented number of people filing for unemployment benefits. The benefits filing system is complex, with numerous eligibility criteria, and public agencies are not equipped to handle this level of traffic and engagement. This has left many frustrated by the lack of pertinent information through official channels. BEBO, short for 'Benefits Bot,' is a chatbot designed to help unemployed people, especially those who lost their jobs due to the COVID-19 pandemic, easily obtain benefits-related information. We conducted an extensive user study to understand who seeks benefits during this pandemic and what their pain points are, in order to make the bot user-centric and accessible to the most vulnerable groups. The initial version of BEBO was developed for California and is now being scaled up to include other US states.
Speaker bio: Dr. Kalai Ramea is a Research Area Manager at the Palo Alto Research Center (PARC), Xerox. She obtained her Ph.D. in Operations Research from the University of California, Davis in 2016. She is the recipient of the Helene M. Overly Memorial Graduate Scholarship and was invited to the National Academy of Engineering's US Frontiers of Engineering Symposium as one of 100 young engineers around the nation selected to participate. She was recently featured in the Women Leading the AI Industry series by Authority Magazine. Her interests lie broadly in machine learning, statistics, and quantitative modeling, and she is currently exploring novel modeling frameworks in artificial intelligence for climate and energy systems.
Dr. Sudeep Sarkar, Professor and Chairperson, Department of Computer Science and Engineering, University of South Florida, Tampa, FL
Abstract: Events are central to the content of human experience. From the constant sensory onslaught, the brain segments and represents aspects related to events and stores them in memory for future comparison, retrieval, and re-storage. The contents of events consist of objects/people (who), location (where), time (when), actions (what), activities (how), and intent (why). Many deep learning-based approaches extract this information from videos. However, most methods cannot adapt much beyond what they were trained on and are incapable of recognizing events beyond those they were explicitly programmed or trained to handle. The main limitation of current event analysis approaches is an implicit closed-world assumption: the ability to support open-world inference is limited by the source of semantics, namely the annotations that come with the training data. In this talk, I will share our successes using symbolic knowledge bases such as ConceptNet as the source of semantics for creating video interpretations. This allows the set of primary objects and actions to expand to a very large (though not infinite) set without the need for massive annotated training sets. It is a synthesis of symbolic reasoning and neural approaches, i.e., a hybrid neuro-symbolic approach. We accomplish this synthesis using Grenander's energy-based pattern theory engine. We will see how Grenander's canonical pattern-theoretic representation offers an elegant, flexible, compositional mechanism; the most popular probabilistic symbolic approaches, such as Bayesian networks and Markov random fields, are special cases of Grenander's canonical representations. These representations naturally model semantic connections between what is observed directly in the image and prior knowledge in large-scale commonsense knowledge bases such as ConceptNet. We will share results showing that this hybrid approach can match (and sometimes exceed) the performance of the best end-to-end deep learning approaches.
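As a small illustration of the "knowledge base as the source of semantics" idea (a sketch of one ingredient only, not the pattern-theory engine itself), ConceptNet's public REST API at api.conceptnet.io can be queried to link a detected object or action to related commonsense concepts, letting an interpretation generalize beyond the annotated training labels:

```python
import requests

def related_concepts(term, limit=5):
    """Fetch neighboring ConceptNet edges for an English-language term."""
    url = f"http://api.conceptnet.io/c/en/{term}"
    edges = requests.get(url, timeout=10).json().get("edges", [])
    # Each edge carries a relation label and a weight indicating its strength.
    return [(e["start"]["label"], e["rel"]["label"], e["end"]["label"], e["weight"])
            for e in edges[:limit]]

# E.g., link a detected action "chopping" to commonsense neighbors such as
# (knife, UsedFor, chopping); the weighted edges can then inform the prior
# energies connecting observed labels to concepts never seen in training.
for edge in related_concepts("chopping"):
    print(edge)
```

In a pattern-theoretic formulation, such concepts would play the role of generators and the weighted edges the role of bonds whose energies score a candidate video interpretation, though the details of that engine are beyond this sketch.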
Speaker bio: Dr. Sudeep Sarkar is a Professor and Department Chair of Computer Science and Engineering and Associate Vice President for Research & Innovation at the University of South Florida (USF) in Tampa, FL. He received his MS and Ph.D. degrees in Electrical Engineering, on a University Presidential Fellowship, from The Ohio State University. His research interests span areas of computer vision from video image processing to biometrics and medical image analysis of burn scars. He has made seminal algorithmic and theoretical contributions to the field of computer vision, particularly on the problems of perceptual organization and sign language recognition, and more recently on event understanding using pattern theory. He has also made many seminal contributions to the field of biometrics, is considered a world leader in gait biometrics, and is frequently called upon to participate in world meetings on the topic. The benchmark he developed is the de facto standard for the development of gait recognition algorithms. He is a Fellow of the American Association for the Advancement of Science (AAAS), the Institute of Electrical and Electronics Engineers (IEEE), the American Institute for Medical and Biological Engineering (AIMBE), and the International Association for Pattern Recognition (IAPR), and a charter member and member of the Board of Directors of the National Academy of Inventors (NAI).
Venkataraman Sundareswaran, MCHC Fellow, AI & Machine Learning, World Economic Forum
Panelist bio: Sundar is an Artificial Intelligence Fellow at the World Economic Forum, where he is co-creating, with a multi-stakeholder community, a governance framework for the use of chatbots in healthcare. He represents Mitsubishi Chemical Holdings Corporation in this role at the Forum's Centre for the Fourth Industrial Revolution. Sundar is a seasoned technologist with research, development, P&L, and executive leadership experience. With a Master's degree in natural language understanding and a PhD in computer vision, Sundar made numerous research contributions in robotics, neural networks, human-computer interaction, virtual/augmented reality, and autonomous vehicles before taking leadership roles in advanced technology production facilities. He is passionate about the responsible deployment of novel technologies in societally important areas such as healthcare.