R2K: Integrating learning of Representations and models with deductive Reasoning that leverages Knowledge
27th October 2018 - Tempe, Arizona
Co-located with Knowledge Representation (KR) 2018
Intelligent agents that operate autonomously and effectively in social settings with humans are expected to learn, think, reason, understand, act, and explain their actions as humans do. The goal of this workshop is to spur research into building systems that integrate bottom-up, data-driven inference and learning mechanisms (including deep learning, which has made tremendous progress in recent years) with top-down reasoning mechanisms that draw on common-sense knowledge, domain knowledge, and explicit (possibly symbolic) representations of the world. This reasoning needs to be explainable in terms of a sequence of increasingly abstract semantics grounded in the data and in the world models used for reasoning.
The above need is acutely felt and widely acknowledged in AI and its application domains such as computer vision, natural language processing, and robotics. For example, a text understanding system not only needs to perform semantic parsing (which often involves bottom-up methods trained over a large corpus) but also needs to reason with relevant common sense and background knowledge; the Winograd Schema Challenge is a good test bed for this. Similarly, image/video understanding systems, as well as the associated visual question answering (VQA) systems, can benefit from integrating bottom-up perception methods with top-down reasoning approaches that utilize common-sense and domain knowledge. Robotic systems can learn how and when to act not only through interaction, but also through verbal teaching, human demonstrations, and combinations of the two. These again require both text and image understanding with top-down guidance, as well as reasoning over background knowledge about the environment.
There are several challenges in developing such systems. Creating and updating knowledge representations of the world, exploiting them to form mental models in working memory, generating working hypotheses that direct attention for evidence verification and belief revision (when needed), and designing control mechanisms for all of the above are necessary not only for solving higher-level reasoning problems, but also for thinking like humans and for imparting explainability to deductions made from sensory data. For example, an end-to-end deep learning based system requires a way to bake explicit knowledge into it, and one also has to determine how to make the overall system explainable. A pipeline of learning and reasoning modules, on the other hand, makes joint optimization or knowledge distillation a challenge. A further major challenge is to develop a carefully curated set of shared tasks that emphasize these issues. The shared tasks would involve, in addition to the task definitions, carefully collected (large) annotated datasets and meticulously designed evaluation mechanisms. These tasks would then be opened to the research community and run as industry- and academia-funded competitions.
We intend to consider several kinds of shared tasks: for example, tasks that require task-specific models to be constructed on the fly; tasks where training and testing involve different data modalities; and multi-task settings where training and test tasks differ but share large common vocabularies. We will specifically consider tasks and approaches that lead to plausible hypotheses testable by computational neuroscience, and we will design evaluation metrics that penalize answers which are not explainable in terms of layers of abstracted semantics grounded in the data.
In this first workshop, we intend to
Please see the call for papers for submission instructions.