Senior Research Scientist at Sony AI, where he leads the Scientific Discovery Flagship Project. His work sits at the intersection of AI and cognitive science, spanning computational creativity, neurosymbolic methods, and trustworthy AI. Previously, he held roles across industry and academia, including Head of Strategic AI at DEKRA DIGITAL, CTO at Neurocat, Chief Science Officer at Telefónica’s Alpha Health, and Lecturer/Assistant Professor in Data Science at City, University of London. He earned a PhD in Cognitive Science from the University of Osnabrück and an MSc in Mathematics from FAU Erlangen–Nuremberg (with studies at the University of Zaragoza). Selected recent work includes research on knowledge graphs, explainability benchmarks, and machine learning security.
TALK ABSTRACT
Following in the footsteps of Renaissance cabinets of curiosities, this talk will shine a light on several closely connected but often separately considered topics: explanations and the notion of explainability in AI/ML; structured knowledge representations and knowledge graphs; and the human nature of the users of most AI/ML systems. Starting from my personal take on explanations and some of their key properties, I will continue with considerations on how to test the (in)efficacy of XAI methods. Finally, I will turn to a recent application example from AI for Science, using ML over knowledge graphs to generate scientific (proto-)hypotheses, and discuss the role explainability plays in making the system's outputs palatable to researchers.