Task 5.6: Fostering the AI scientific community on the theme of deciding and learning how to act
M1-M36, Task Leader: RWTH (Gerhard Lakemeyer). See Instrument 3, Section 1.3.2.3 for a description.
1.3.2.3 Instrument 3: Basic research program
In the preface to his second book devoted to robots, published in 1964, Isaac Asimov wrote: “Knowledge has its risks, but should our reaction be to retreat from risk? Or should we not rather use knowledge as a barrier against the very risks it entails? Knives are made with handles so that they can be grasped without danger; stairs are equipped with railings; electrical wires are insulated; pressure cookers have safety valves; in every product we take care to minimize the risk. Sometimes the safety achieved is insufficient, owing to limitations imposed by the nature of the universe or of the human mind. Nevertheless, the attempt must be made. As a machine, a robot will certainly be designed to offer guarantees of safety, at least as far as possible.”
Read with today’s eyes, Asimov’s words can be adopted as a manifesto for Trustworthy AI. The quest for Trustworthy AI [7] is high on both the political and the research agenda, and it constitutes TAILOR’s first research objective H1: develop the foundations for trustworthy AI. This objective is concerned with designing and developing AI systems that incorporate the safeguards that make them trustworthy and respectful of human agency and expectations: not only the mechanisms that maximize benefits, but also those that minimize harm. TAILOR focuses on the technical research needed to achieve Trustworthy AI, while striving to establish a continuous interdisciplinary dialogue on the methods and methodologies needed to fully realize it. To this end, TAILOR has identified six fundamental dimensions to focus on, i.e., D1) explainability, D2) safety, D3) fairness, D4) accountability and reproducibility, D5) privacy, and D6) sustainability. These dimensions are covered in detail in the Ambition section.
Learning, reasoning and optimization are the common mathematical and algorithmic foundations on which artificial intelligence and its applications rest. It is therefore surprising that they have so far been tackled mostly independently of one another, giving rise to quite different models studied in largely separate communities. AI has focused on reasoning for a very long time and has contributed numerous effective techniques and formalisms for representing knowledge and performing inference. Recent breakthroughs in machine learning, and in particular in deep learning, have revolutionized AI and provide solutions to many hard problems in perception and beyond. However, this has also created the false impression that AI is just learning, if not just deep learning, and that data is all one needs to solve AI. This rests on the assumption that, provided with sufficient data, any complex model can be learned. Well-known drawbacks are that the required amounts of data are not always available, that only black-box models are learned, that these models provide little or no explanation, and that they do not lend themselves to complex reasoning. On the other hand, much research in machine reasoning has given the impression that sufficient knowledge and fast inference suffice to solve the AI problem. The well-known drawbacks here are that knowledge is hard to represent and even harder to acquire, although the resulting models are more white-box and explainable. Today there is a growing awareness that learning, reasoning and optimization are all needed and actually need to be integrated [8]. For instance, Yann LeCun phrased it as “Though perception ‘really works’, what is still missing is reasoning” in PCMag [9], and the Director of DARPA, Steven Walker [10], states that “Today, machines lack contextual reasoning capabilities and their training must cover every eventuality, which is not only costly, but ultimately impossible... We want to explore how machines can acquire human-like communication and reasoning capabilities, with the ability to recognize new situations and environments and adapt to them.” TAILOR’s second research objective H2 is therefore to tightly integrate learning, reasoning and optimization. It will realize this by focusing on the integration of different paradigms and representations for AI.
TAILOR’s two research objectives H1 and H2 are tightly connected: achieving Trustworthy AI is not possible without an integrated approach to learning, reasoning and optimization, and, vice versa, AI frameworks that integrate these make it possible to address many of the requirements imposed by Trustworthy AI. TAILOR therefore takes a synergistic approach to addressing them.
TAILOR will pursue these two objectives within more specific and tangible contexts, each of which corresponds to an area where Europe is at the forefront of research. These contexts all involve automation or autonomy, which implies that trustworthy AI is required: V1) using AI to act autonomously in an environment; V2) using AI agents to act and learn in a society, i.e., to communicate, collaborate, and negotiate with, and understand, other agents and humans; and V3) democratizing and automating AI, i.e., enabling people with limited expertise to build, deploy, and maintain high-quality AI systems. This explains the structure of the research program: two horizontal WPs, which correspond to the two research objectives H1 and H2, and three vertical WPs, which correspond to the contexts V1, V2 and V3. All vertical WPs will address both Trustworthy AI aspects and learning, reasoning and optimization. To guarantee synergies between Trustworthy AI and the other WPs, each WP will focus on at least one of the dimensions of Trustworthy AI, as detailed in the Table in Section 1.4.2.2. Note that the vertical V3 on automated AI is very relevant to the technical WPs 4-6, as these WPs are all concerned with AI techniques, so there will be ample opportunities for interaction.
This also implies that each scientific WP 4, 5, 6 and 7 will have a “joint” task with WP 3 on the dimensions they share. Furthermore, the scientific research WPs 3 to 7 will, of course, also interact with WP 2 (Roadmap) and WP 8 (Industry, Innovation and Transfer program and network). As already indicated under Instrument 1, they will each have two standard tasks: one on fostering their subcommunities and one on synergies with WPs 2 and 8.
References:
[7] High-Level Expert Group on AI (HLEG), Ethics Guidelines for Trustworthy AI, European Commission, 2019.
[8] Darwiche, A., CACM, Vol. 61, 2018; Lake, B. M. et al., Behavioral and Brain Sciences, Vol. 40, 2017; Geffner, H., IJCAI 2017.
[9] https://www.pcmag.com/article/357463/yann-lecun-discusses-the-power-limits-of-deep-learning
[10] https://www.darpa.mil/news-events/2018-09-07