Introduction

The question to be answered in this WP is: how to integrate learning, reasoning and optimisation? The AI community has studied the abilities to learn, to reason and to optimise largely independently of one another, and has been divided into different schools of thought or paradigms [1]. This divide is often described in confrontational terms: System 1 vs. System 2 [2], subsymbolic vs. symbolic, learning vs. reasoning, knowledge-based vs. data-driven, model-based vs. model-free, logic or symbolic vs. neural, low-level vs. high-level, and so on. Whichever terminology is used, the terms refer to very similar distinctions, and the state of the art is that each paradigm can solve certain tasks but not others. For instance, symbolic AI and the logic paradigm have concentrated on developing sophisticated and accountable reasoning methods, subsymbolic or neural approaches have concentrated on developing powerful architectures for learning and perception, and constraint and mathematical programming have been used for combinatorial optimisation. While deep learning provides solutions to many low-level perception tasks, it cannot really be used for complex reasoning; for logical and symbolic methods, it is just the other way around. Symbolic AI may be more explainable, interpretable and verifiable, but it is less flexible and adaptable. Trustworthy AI cannot rely on a single paradigm: it must have all the abilities mentioned above, i.e., it must be able to learn, to reason and to optimise. Therefore, the quest for integrated learning, reasoning and optimisation abilities boils down to computationally and mathematically integrating different AI paradigms.

The most apparent difference between paradigms lies in the representations they use, so one operational way to answer the question is to tightly integrate different representations so as to offer learning, reasoning and optimisation in a common framework. This WP will therefore design representational systems, with accompanying inference, learning and optimisation algorithms, that can support trustworthy artificial intelligence. The integrated or “unified” representations should make it possible to address the whole AI cycle from low-level perception to high-level reasoning, should be able to use data as well as knowledge, and, most of all, should produce trustworthy AI. With respect to the trustworthiness of representations, the most critical dimension is explainability; explainability is also a prerequisite for the other dimensions of trustworthiness, which is why this WP focuses on that dimension. The quest for integrated representations and paradigms in artificial intelligence is akin to systems biology in that it aims at understanding AI by putting the pieces together, rather than focusing on the individual representations and building blocks. It thus constitutes a kind of systems AI. Just as in systems biology, this involves working at different levels of abstraction.

The work on integrated representations and abilities has been fragmented into activities in many specialised subcommunities, each with its own workshops (such as KR2ML, NeSy and StarAI, which TAILOR will bring together). Neuro-symbolic computation (NeSy) aims to bridge the gap between neural networks and logical and symbolic approaches to reasoning. Although many promising NeSy models and representations [3] have been introduced recently, they are still severely limited in their reasoning and explanation abilities, because they either push the logic inside the neural network or use logic constraints to regularise the network rather than for reasoning. Furthermore, they do not yet support realistic applications, certainly not applications that require both perception and the use of knowledge for high-level reasoning, and they do not scale well. Statistical Relational AI (StarAI) [4] extends probabilistic graphical models with first-order logic and in fact pursues goals similar to those of probabilistic programming. Its selling point is that it integrates probabilistic and logical reasoning with statistical learning. Despite its successes and its applications in domains such as network analysis (in particular, entity resolution and link prediction), natural language processing and bioinformatics, important challenges remain, such as using StarAI for real-time perception and acting, and scalability. Some of these challenges could be addressed by integrations with neural networks. TAILOR will take a more general approach to learning and reasoning than both NeSy and StarAI, in that it will consider the integration of any subset of the mentioned representations for learning and reasoning. As for learning and optimisation, the constraint programming community has contributed frameworks [5], such as empirical model learning (EML), that learn constraint satisfaction and constraint programming models from the available data and then use these models to select optimal decisions through combinatorial optimisation. Milano considers “EML ... as a technique to merge predictive and prescriptive analytics.” Going one step further, machine learning may make it possible to learn the behaviour of an optimisation system in general, which is related to the theme of WP7 in that machine learning can learn to configure solvers.
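
As a toy illustration of the kind of integration StarAI pursues, the following sketch computes the probability of a logical query by enumerating possible worlds over independent probabilistic facts (weighted model counting). It is a minimal sketch of the idea, not an implementation of the systems in [3, 4]; all facts, probabilities and rules are invented for illustration.

```python
from itertools import product

# Independent probabilistic facts (all numbers invented), in the style of
# a probabilistic logic program.
facts = {"burglary": 0.1, "earthquake": 0.2}

def alarm(world):
    # Deterministic logical rule: alarm :- burglary ; earthquake.
    return world["burglary"] or world["earthquake"]

def probability(query):
    # Inference by enumerating all possible worlds (weighted model counting);
    # exponential in the number of facts, but fine for a toy example.
    names = list(facts)
    total = 0.0
    for truth_values in product([True, False], repeat=len(names)):
        world = dict(zip(names, truth_values))
        weight = 1.0
        for name, value in world.items():
            weight *= facts[name] if value else 1.0 - facts[name]
        if query(world):
            total += weight
    return total

print(f"P(alarm) = {probability(alarm):.3f}")  # 1 - 0.9 * 0.8 = 0.280
```

In StarAI systems the probabilities attached to such facts are learned from data, and in some NeSy systems [3] the probabilistic facts can themselves be produced by neural networks, which is precisely the kind of tight integration this WP targets.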

Low-level perception and computer vision. Deep learning has contributed solutions to numerous traditional computer vision tasks, but integrating vision with reasoning is still an open problem, as witnessed by the many challenging datasets on reasoning in a computer vision context [6].

Embeddings and ontological reasoning. Embeddings [7] are among the most powerful techniques in deep learning and they are routinely used in numerous applications involving natural language and knowledge graphs. Although it has been shown that embeddings can be used for simple inference tasks [8], it is still unclear how to combine them to support multi-step reasoning [9].
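
To make this concrete, the sketch below shows the kind of single-step inference that embedding methods support: triples are scored by simple vector arithmetic. A TransE-style score is used purely as an illustration, with random rather than learned embeddings and invented entities; chaining several such steps into a multi-step argument is exactly what remains open.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16

# Toy knowledge-graph embeddings (random here; in practice they are learned
# from known triples).
entities = {e: rng.normal(size=dim) for e in ["paris", "france", "europe"]}
relations = {r: rng.normal(size=dim) for r in ["capital_of", "located_in"]}

def score(head, relation, tail):
    # TransE-style plausibility: a triple (h, r, t) is plausible when the
    # embedding of h plus that of r is close to the embedding of t.
    return -np.linalg.norm(entities[head] + relations[relation] - entities[tail])

# Single-step inference: rank candidate tails for (paris, capital_of, ?).
candidates = sorted(entities, key=lambda t: score("paris", "capital_of", t), reverse=True)
print(candidates)

# Multi-step reasoning, e.g. capital_of(paris, france) and
# located_in(france, europe) therefore located_in(paris, europe), has no
# direct counterpart here: composing scores is not the same as chaining proofs.
```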

Explainable AI (XAI) [10] has focused on making machine learning models more transparent and explainable. Especially relevant to the present WP are systems that use logical inference on domain models (ontologies, knowledge bases, knowledge graphs, etc.) and knowledge representation and reasoning techniques to explain the results of a neural network or, more generally, of machine learning.
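
A minimal sketch of this style of explanation, under the assumption of a toy ontology and toy attribute detectors (all names below are invented): the output of a black-box classifier is justified by logical comparison with domain knowledge rather than by inspecting the model's internals.

```python
# Hypothetical domain knowledge: each class is characterised by a set of
# attributes, as in a small ontology or knowledge base.
ontology = {
    "zebra": {"has_stripes", "has_four_legs", "is_mammal"},
    "tiger": {"has_stripes", "has_four_legs", "is_mammal", "is_carnivore"},
}

def explain(predicted_class, detected_attributes):
    # Explain a black-box prediction by comparing it with the ontology:
    # which defining attributes were observed, and which are missing.
    required = ontology[predicted_class]
    supporting = sorted(required & detected_attributes)
    missing = sorted(required - detected_attributes)
    explanation = (f"Predicted '{predicted_class}' because the input shows: "
                   + ", ".join(supporting) + ".")
    if missing:
        explanation += " Not detected: " + ", ".join(missing) + "."
    return explanation

# The predicted class and the detected attributes would come from neural
# models; here they are hard-coded stand-ins.
print(explain("zebra", {"has_stripes", "has_four_legs", "is_mammal"}))
print(explain("tiger", {"has_stripes", "has_four_legs"}))
```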

[1] Domingos, P. The Master Algorithm. Basic Books, 2015.

[2] Kahneman, D. Thinking, Fast and Slow, 2013.

[3] Donadello et al. IJCAI 2017; Manhaeve et al. NeurIPS 2018; Rocktäschel et al. NIPS 2017; Evans, R. et al. JAIR 2018.

[4] De Raedt, L. et al. Morgan & Claypool, 2016; Russell, S. CACM, 2015.

[5] Lombardi, M. et al. Artificial Intelligence, 2017; Bessiere, C. et al. Artificial Intelligence, 2017.

[6] Yi, K. et al. NeurIPS 2018; Zhou, B. et al. arXiv:1608.05442, 2016; Krishna, R. et al. arXiv:1602.07332, 2016.

[7] Pennington, J. et al. EMNLP 2014.

[8] Xie, Y. et al. NeurIPS 2019; also arXiv:1909.01161v4.

[9] Trouillon, T. et al. JAIR 2019; DeYoung, J. et al. arXiv:1911.03429, 2019.

[10] Guidotti, R. et al. ACM Computing Surveys, 2019.

Objectives

The question addressed in this WP is how to integrate learning, reasoning and optimisation, that is, how to computationally and mathematically integrate different AI paradigms. The most apparent difference between paradigms lies in the representations they use, so an operational way to answer the question is to tightly integrate different representations so as to offer learning, reasoning and optimisation in common frameworks. This WP will therefore design representational systems with accompanying inference, learning and optimisation algorithms that can support trustworthy artificial intelligence (especially along the dimension of explainability). It will also study applications in two different domains. The WP is divided into four main tasks and is connected to other WPs by two further tasks.

Research Challenges

  • Integrating or unifying different representations, i.e., (subsets of) logic, probability, constraints and neural models, for learning and reasoning. Scaling up inference and learning algorithms for such representations. Example: develop a NeSy system that can perform both logical inference and deep learning as well as pure logic and pure deep learning systems do.

  • Explainability and trustworthiness of integrated representations. Provide explanation methods that focus on understanding the rationales, contexts and interpretations of the model using domain knowledge, rather than relying on the transparency of its internal computational mechanism.

  • Learning for combinatorial optimisation and decision making. Example: learn and continuously adapt a model to schedule tasks in a data centre in order to minimise electricity consumption (see the sketch after this list).

  • Showcase applications of using domain knowledge in learning. Combining learning and complex reasoning (over knowledge graphs and ontologies). Example: improve on the work on predicting sentence entailment, where purely neural approaches have recently been shown to fall short.

  • Showcase applications in perception, spatial reasoning, robotics and vision. Combining high-level reasoning and low-level perception. Example: use common-sense knowledge to decide which item in an image is a real object and which is itself a picture.
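
As announced in the third challenge above, the following sketch illustrates the learn-then-optimise loop on the data-centre example. It is a toy setup with invented numbers, not an implementation of EML [5]: a simple predictive model of per-machine energy use is fitted to historical observations and then embedded in an exhaustive search over schedules; at realistic scale the search would be delegated to a constraint or mathematical programming solver.

```python
from itertools import product

# Historical observations (invented): (machine, load, measured energy).
history = [
    ("m1", 2, 9.8), ("m1", 4, 20.5), ("m1", 6, 30.9),
    ("m2", 2, 7.1), ("m2", 4, 14.2), ("m2", 6, 21.0),
]

def fit_energy_model(data):
    # Predictive step: fit energy ~ slope * load per machine (least squares
    # through the origin); any regression model could be plugged in instead.
    slopes = {}
    for machine in {m for m, _, _ in data}:
        points = [(load, e) for m, load, e in data if m == machine]
        slopes[machine] = sum(l * e for l, e in points) / sum(l * l for l, _ in points)
    return lambda machine, load: slopes[machine] * load

predict = fit_energy_model(history)

# Prescriptive step: assign each task to a machine so that total predicted
# energy is minimal (brute force over all assignments).
tasks = {"t1": 2, "t2": 3, "t3": 4}   # task -> load
machines = ["m1", "m2"]

def total_energy(assignment):
    load = {m: 0 for m in machines}
    for task, machine in assignment.items():
        load[machine] += tasks[task]
    return sum(predict(m, l) for m, l in load.items() if l > 0)

best = min(
    (dict(zip(tasks, combo)) for combo in product(machines, repeat=len(tasks))),
    key=total_energy,
)
print(best, round(total_energy(best), 2))
```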