Head of the Research Department on the Foundations of Systems AI at the German Research Center for Artificial Intelligence (DFKI), Darmstadt, Germany
Professor, Computer Science, TU Darmstadt, Germany
Title
Reasonable AI: Bridging "Intuition" and (Causal) Reasoning
Abstract
Causality and logical reasoning are cornerstones of intelligent behavior, yet our most powerful AI systems often lack genuine causal and logical understanding. Large Language Models (LLMs), despite their fluency, act more like causal parrots, reciting statistical patterns of causal language rather than reasoning logically and causally. In this talk, I explore the emerging concept of meta-causality, which suggests that LLMs may encode surface-level correlations about causal structures without ever modeling cause and effect themselves. To move beyond this limitation, we must integrate the intuitive, fast heuristics of System 1 with the deliberate, structured reasoning of System 2, as described by Daniel Kahneman. I will illustrate how approaches such as differentiable logic, causal interventions, and neuro-symbolic reasoning can help bridge the gap between data-driven learning and reasoning, paving the way toward "reasonable AI". This vision calls for systems that not only learn from data but also understand, explain, and act on the world in causally meaningful ways.
Bio
Dr. Kristian Kersting is a Full Professor (W3) at the Computer Science Department of TU Darmstadt, Germany. He is the head of the Artificial Intelligence and Machine Learning (AIML) lab, a member of the Centre for Cognitive Science, a faculty member of the ELLIS Unit Darmstadt, and the founding co-director of the Hessian Center for Artificial Intelligence (hessian.ai). After receiving his Ph.D. from the University of Freiburg in 2006, he was with MIT, Fraunhofer IAIS, the University of Bonn, and TU Dortmund University. His main research interests are statistical relational artificial intelligence (AI) as well as deep (probabilistic) programming and learning. Kristian has published over 200 peer-reviewed technical papers, co-authored a Springer book on Statistical Relational AI, and co-edited an MIT Press book on Probabilistic Lifted Inference. Kristian is a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), a Fellow of the European Association for Artificial Intelligence (EurAI), a Fellow of the European Laboratory for Learning and Intelligent Systems (ELLIS), a Fellow of the Asia-Pacific Artificial Intelligence Association (AAIA), and a key supporter of the Confederation of Laboratories for Artificial Intelligence Research in Europe (CAIRNE). He received the inaugural German AI Award (Deutscher KI-Preis) 2019, accompanied by a prize of EUR 100,000, several best paper and outstanding reviewer awards, a Fraunhofer Attract research grant with a budget of EUR 2.5 million over five years (2008-2013), and the EurAI (formerly ECCAI) Dissertation Award 2006 for the best Ph.D. thesis in the field of Artificial Intelligence in Europe.
Acting Department Chair, Director of Center for Integrative Artificial Intelligence (CIAI), and Visiting Professor of Machine Learning, MBZUAI, Abu Dhabi, UAE
Associate Professor, Department of Philosophy, Carnegie Mellon University, Pittsburgh, USA
Title
Causal Representation Learning and Generative AI
Abstract
Causality is a fundamental notion in science, engineering, and even in machine learning. Uncovering the causal process behind observed data can naturally help answer 'why' and 'how' questions, inform optimal decisions, and achieve adaptive prediction. In many scenarios, observed variables (such as image pixels and questionnaire results) are often reflections of the underlying causal variables rather than being causal variables themselves. Causal representation learning aims to reveal the underlying hidden causal variables and their relations. In this talk, we show how the modularity property of causal systems makes it possible to recover the underlying causal representations from observational data with identifiability guarantees: under appropriate assumptions, the learned representations are consistent with the underlying causal process. We demonstrate how identifiable causal representation learning can naturally benefit generative AI, with image generation, image editing, and text generation as particular examples.
Bio
Kun Zhang is an associate professor at Carnegie Mellon University (CMU), and he is also a visiting professor in the machine learning department at Mohamed bin Zayed University of Artificial Intelligence (MBZUAI). He has been actively developing methods for automated causal discovery from various kinds of data and investigating machine learning problems, including transfer learning, representation learning, and reinforcement learning, from a causal perspective. He has frequently served as a senior area chair, area chair, or senior program committee member for major conferences in machine learning and artificial intelligence, including UAI, NeurIPS, ICML, IJCAI, AISTATS, and ICLR. He was a general and program co-chair of the first Conference on Causal Learning and Reasoning (CLeaR 2022), a program co-chair of the 38th Conference on Uncertainty in Artificial Intelligence (UAI 2022), and is a general co-chair of UAI 2023. He currently serves as an associate editor of JASA, JMLR, IEEE Transactions on Pattern Analysis and Machine Intelligence, and ACM Computing Surveys, among others.