Invited Speaker
Prof. Kees van Berkel
Affiliation: Institute for Logic and Computation, TU Wien
Speech title: AI Alignment and Normative Reasoning
ABSTRACT
Artificial Intelligence (AI) applications fundamentally impact individuals, the environment, and society as a whole. Many applications of this transformative technology pose ethical challenges for developers, policy-makers, and society. Ethics and AI meet in various ways. In this talk, we identify three central pillars: i) novel ethical concerns specific to AI; ii) AI-tailored policy-making; and iii) the implementation of formal reasoning with norms and values. Particular emphasis is placed on the third pillar, owing to the increasing awareness that AI systems must be aligned with human values, ethics, and laws. Besides determining what these values should be, the immediate question is how values and norms influence formal reasoning processes. One of the key challenges for such reasoning is that norms and values are highly conflict-sensitive. Early formal studies of normative reasoning (often grouped under the umbrella term “deontic logic”) emerged in the 1950s, whereas Knowledge Representation approaches in the 2000s led to the introduction of defeasible normative reasoning systems that adopt conflict-resolution mechanisms. Whereas traditional methods in the field have focused on showing that certain obligations, rights, or permissions follow from a given normative knowledge base, developments in AI have shifted the focus to explainability, additionally requiring proof of why certain conclusions follow. Concerning the latter, we discuss recent work that leverages computational argumentation and dialogue models to provide explanatory normative reasoning, and we identify several key challenges for future research.
SHORT BIO
Kees van Berkel is an Assistant Professor of AI Ethics at the Institute for Logic and Computation at TU Wien. He has a background in practical philosophy (philosophy of agency, ethics, and meta-ethics) as well as logic and computer science. His core research lies at the intersection of philosophy, ethics, and symbolic AI (especially knowledge representation and reasoning). It includes the development of logical methods for reasoning with normative systems, the study of norm explanations in AI through formal dialogue models, the logical and philosophical analysis of meta-ethical principles, and the modeling of conflict-resolution methods for norm and value conflicts. A shared characteristic of these topics is the interdisciplinary nature of the research, which addresses problems and questions that require close collaboration between experts from various fields, including the humanities, law, computer science, and mathematics. As of 2024, Kees is part of the cluster of excellence program Bilateral AI (research module “Ethical AI Systems”), investigating AI alignment and Explainable AI, and is co-coordinator of the special interest group “AI Ethics” at the Center for Artificial Intelligence and Machine Learning (CAIML).