Schedule

Schedule - July 26th 2021

13:00 - 13:15

Welcome

Bernhard Schölkopf

Bernhard Schölkopf's scientific interests are in machine learning and causal inference. He has applied his methods to a number of different fields, ranging from biomedical problems to computational photography and astronomy. Bernhard was a researcher at AT&T Bell Labs, at GMD FIRST in Berlin, and at Microsoft Research Cambridge, UK, before becoming a Max Planck director in 2001. He is a member of the German Academy of Sciences (Leopoldina), has (co-)received the J.K. Aggarwal Prize of the International Association for Pattern Recognition, the Academy Prize of the Berlin-Brandenburg Academy of Sciences and Humanities, the Royal Society Milner Award, the Leibniz Award, the Koerber European Science Prize, and the BBVA Foundation Frontiers of Knowledge Award, and is an Amazon Distinguished Scholar. He is a Fellow of the ACM and of the CIFAR Program "Learning in Machines and Brains", and holds a professorship at ETH Zurich.

Bernhard co-founded the series of Machine Learning Summer Schools, and currently acts as co-editor-in-chief for the Journal of Machine Learning Research, an early development in open access and today the field's flagship journal.

13:15 - 14:00

Matt J. Kusner

Privacy & Security

Causality is Not Useful for Privacy

Causal modelling has recently seen a rise in attention from the machine learning community due to its ability to formalize invariances in data. This has been useful for improving model generalization, adapting models to new data domains, and learning fair and explainable models. A natural question is whether such invariance is also useful for ensuring the privacy of data and models: given that certain notions of privacy require invariance to changes in the data (e.g., differential privacy), does causality buy us privacy? In this talk, I argue that it does not: in the case of differential privacy, we require a fundamentally different notion of invariance than the invariances described by causal modelling. Further, there are attacks against which differential privacy offers no protection, and these attacks are potentially strengthened by the use of causal models.
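
For readers less familiar with the privacy side, the invariance that differential privacy demands can be made concrete with a small illustrative sketch in Python (my own, not from the talk; the function and its inputs are assumed for illustration). The point is that the constraint acts on the release mechanism, not on the causal structure of the data: the output distribution must be nearly unchanged when any single record is modified.

import numpy as np

def laplace_mean(data, epsilon, lower=0.0, upper=1.0, rng=None):
    """Release the mean of `data` with epsilon-differential privacy via the
    Laplace mechanism (a standard construction, sketched here only to
    illustrate the notion of invariance the abstract refers to)."""
    rng = np.random.default_rng() if rng is None else rng
    data = np.clip(np.asarray(data, dtype=float), lower, upper)
    # Changing one record moves the clipped mean by at most this much,
    # which fixes the scale of the added noise.
    sensitivity = (upper - lower) / len(data)
    return data.mean() + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Whether or not any single individual's record is changed, the distribution
# of the released value stays almost the same: an invariance of the mechanism,
# not of any causal model of the data-generating process.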

Matt Kusner is an associate professor in machine learning at University College London. His work aims to design simple machine learning models tailored to the constraints of the problem at hand, particularly in causal inference, algorithmic fairness, secure/private learning, and molecular/materials design. After receiving his PhD in Computer Science from Washington University in St. Louis in 2016, Matt was part of the first cohort of research fellows at the Alan Turing Institute in London. Before joining UCL, he was an associate professor at the University of Oxford and a tutorial fellow at Jesus College.

14:00 - 14:45

Isabel Valera

Fairness

Causality and fairness in ML: (a few) promises and challenges

Isabel Valera is a full Professor of Machine Learning at the Department of Computer Science of Saarland University in Saarbrücken (Germany), and Adjunct Faculty at the MPI for Software Systems in Saarbrücken (Germany). She is also a scholar of the European Laboratory for Learning and Intelligent Systems (ELLIS). Prior to this, she was an independent group leader at the MPI for Intelligent Systems in Tübingen (Germany). She has held a German Humboldt Post-Doctoral Fellowship and a "Minerva fast track" fellowship from the Max Planck Society. She obtained her PhD in 2014 and her MSc degree in 2012 from the University Carlos III in Madrid (Spain), and worked as a postdoctoral researcher at the MPI for Software Systems (Germany) and at the University of Cambridge (UK). Her research focuses on developing machine learning methods that are flexible, robust, interpretable, and fair for analyzing real-world data.


14:45 - 15:30

Jonas Peters

Robustness

Testing under distributional shifts and robustness in reinforcement learning

It has been suggested to use heterogeneity in the training data to find prediction models that are robust with respect to changes in the underlying distribution. Statistical testing under distributional shifts may help us to apply such ideas in the field of reinforcement learning. In this talk, we introduce testing under distributional shifts and propose a general resampling-based testing procedure that comes with provable theoretical guarantees. The framework further allows us to tackle a diverse set of problems in reinforcement learning, causal inference, and covariate shift. This is joint work with Nikolaj Thams, Sorawit Saengkyongam and Niklas Pfister.
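
As a rough illustration of what resampling-based testing under a distributional shift can look like, the sketch below (my own simplification, not the authors' procedure or its theoretical guarantees; the densities and sample sizes are assumed) reweights observed data by a density ratio so that a small resampled subset behaves approximately like a sample from the shifted target distribution, on which an ordinary test is then run.

import numpy as np
from scipy import stats

def shifted_sample(x, p_density, q_density, m, rng=None):
    """Resample m points from x ~ P with weights proportional to q(x)/p(x),
    so that the subsample approximately mimics draws from the target Q.
    m should be small relative to len(x)."""
    rng = np.random.default_rng() if rng is None else rng
    w = q_density(x) / p_density(x)
    idx = rng.choice(len(x), size=m, replace=False, p=w / w.sum())
    return x[idx]

# Toy example: data observed under P = N(0, 1); test a hypothesis about the
# mean under the shifted target Q = N(0.5, 1) with an off-the-shelf t-test.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=20000)
xq = shifted_sample(x, stats.norm(0, 1).pdf, stats.norm(0.5, 1).pdf, m=100, rng=rng)
print(stats.ttest_1samp(xq, popmean=0.5))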


Jonas Peters is a professor of statistics at the Department of Mathematical Sciences at the University of Copenhagen. Previously, he worked at the Max Planck Institute for Intelligent Systems in Tübingen and at the Seminar for Statistics, ETH Zurich. He studied mathematics at the University of Heidelberg and the University of Cambridge. In his research, Jonas is interested in inferring causal relationships from different types of data and in building statistical methods that are robust with respect to distributional shifts. He seeks to combine theory, methodology, and applications (e.g., in Earth system science or biology). His methodological work relates to areas such as computational statistics, causal inference, graphical models, high-dimensional statistics, and statistical testing.


15:30 - 15:45

Break

15:45 - 16:30

Cynthia Rudin

Explainability

Almost Matching Exactly for Interpretable Causal Inference

I will present a matching framework for causal inference in the potential outcomes setting called Almost Matching Exactly. This framework has several important elements: (1) Its algorithms create matched groups that are interpretable. The goal is to match treatment and control units on as many covariates as possible, or "almost exactly." (2) Its algorithms create accurate estimates of individual treatment effects. This is because we use machine learning on a separate training set to learn which features are important for matching. The key constraint is that units are always matched on a set of covariates that together can predict the outcome well. (3) Our methods are fast and scalable. In summary, these methods rival black box machine learning methods in their estimation accuracy but have the benefit of being interpretable and easier to troubleshoot. Our lab website is here:

https://almost-matching-exactly.github.io
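
To fix ideas, here is a toy, purely illustrative sketch of the matching step (my own simplification under assumed inputs, not the lab's algorithms): each treated unit is matched to control units that agree exactly on as many covariates as possible, where covariates are dropped in order of an importance score assumed to have been learned on a separate training set.

import numpy as np

def match_almost_exactly(X_t, X_c, y_c, importance):
    """For each treated unit, drop covariates one at a time (least important
    first, per the learned `importance` scores) until some control units
    match exactly on the remaining covariates; return the mean control
    outcome of that matched group and the covariates used."""
    order = np.argsort(importance)  # least important first
    results = []
    for x in X_t:
        keep = list(range(X_t.shape[1]))
        while keep:
            mask = np.all(X_c[:, keep] == x[keep], axis=1)
            if mask.any():
                results.append((y_c[mask].mean(), tuple(keep)))
                break
            keep.remove(next(j for j in order if j in keep))
        else:
            results.append((np.nan, ()))  # no control unit matches at all
    return results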


Cynthia Rudin is a professor of computer science, electrical and computer engineering, and statistical science at Duke University, where she directs the Prediction Analysis Lab, whose main focus is interpretable machine learning. Previously, Prof. Rudin held positions at MIT, Columbia, and NYU. She holds an undergraduate degree from the University at Buffalo and a PhD from Princeton University. She is a three-time winner of the INFORMS Innovative Applications in Analytics Award, was named one of the "Top 40 Under 40" by Poets and Quants in 2015, and was named by Businessinsider.com as one of the 12 most impressive professors at MIT in 2015. She is a fellow of the American Statistical Association and a fellow of the Institute of Mathematical Statistics.

16:30 - 17:15

Issa Kohler-Hausmann

Accountability

Some Thoughts on Causality, Social Kinds, and Identifying Legal-Normative Concepts With Causal Concepts

Issa Kohler-Hausmann is Professor of Law at Yale Law School and Associate Professor of Sociology at Yale. Born and raised in Milwaukee, Wisconsin, she holds a Ph.D. in sociology from New York University, a J.D. from Yale Law School, and a B.A. from the University of Wisconsin-Madison. Her award-winning book Misdemeanorland: Criminal Courts and Social Control in an Age of Broken Windows Policing (Princeton, 2018) is a mixed-methods, multi-year study of New York City's lower criminal courts in the era of mass misdemeanor arrests. Her current research addresses the methodological and theoretical issues entailed in stating and proving discrimination and equal protection claims. Admitted to practice in New York State, the Eastern and Southern Districts of New York, and the Western District of Wisconsin, Kohler-Hausmann maintains an active pro bono legal practice, currently with a concentration in parole release for persons serving life sentences for crimes committed as juveniles.

17:15 - 17:30

Break

17:30 - 19:00

Panel

Adrian Weller

Moderator

Adrian Weller is Programme Director for AI at The Alan Turing Institute, the UK national institute for data science and AI, and is also a Turing AI Fellow leading work on trustworthy Machine Learning (ML). He is a Principal Research Fellow in ML at the University of Cambridge, and at the Leverhulme Centre for the Future of Intelligence where he is Programme Director for Trust and Society. His interests span AI, its commercial applications and helping to ensure beneficial outcomes for society. He serves on several boards including the Centre for Data Ethics and Innovation. He is co-director of the European Laboratory for Learning and Intelligent Systems (ELLIS) programme on Human-centric Machine Learning, and a member of the World Economic Forum (WEF) Global AI Council. Previously, Adrian held senior roles in finance.


Ricardo Silva

Fairness

Ricardo Silva is a Professor of Statistical Machine Learning and Data Science at the Department of Statistical Science, UCL. He is also a Faculty Fellow at the Alan Turing Institute. Ricardo obtained a PhD in Machine Learning from Carnegie Mellon University in 2005, followed by postdoctoral positions at the Gatsby Unit and at the Statistical Laboratory, University of Cambridge. His main interests are in causal inference, graphical models, and probabilistic machine learning. His research has received funding from organisations such as EPSRC, Innovate UK, the Office of Naval Research, Winton Research, and Adobe Research. Ricardo has also served on the senior program committees of several top machine learning conferences, including acting as a Senior Area Chair at NeurIPS and ICML and as Program Chair and Conference Chair for the Uncertainty in Artificial Intelligence conference.


Been Kim

Explainability

Been Kim is a staff research scientist at Google Brain. Her research focuses on improving interpretability in machine learning: not only by building interpretability methods but also by challenging their validity. She gave a talk at the G20 meeting in Argentina in 2019. Her work on TCAV received the UNESCO Netexplo Award and was featured at Google I/O '19 and in Brian Christian's book The Alignment Problem. Been has given a keynote at ECML 2020 and tutorials on interpretability at ICML, the University of Toronto, CVPR, and Lawrence Berkeley National Laboratory. She was a workshop co-chair for ICLR 2019, and has been a (senior) area chair at conferences including NeurIPS, ICML, ICLR, and AISTATS. She received her PhD from MIT.


Kun Zhang

Robustness

Kun Zhang is an associate professor of philosophy and an affiliate faculty member in the Machine Learning Department of Carnegie Mellon University. He has been actively developing methods for automated causal discovery from various kinds of data and investigating machine learning problems, including transfer learning and representation learning, from a causal perspective. He frequently serves as a senior area chair, area chair, or senior program committee member for conferences in machine learning and artificial intelligence, including NeurIPS, ICML, UAI, IJCAI, AISTATS, and ICLR, and has co-organized a number of conferences and workshops to foster interdisciplinary exploration of causality.


James F. Woodward

Accountability and Explainability

James Woodward is a Distinguished Professor in the Department of History and Philosophy of Science at the University of Pittsburgh. He works on causal inference and causal explanation, among other topics. He is a fellow of the American Academy of Arts and Sciences and a former president of the Philosophy of Science Association. His book Making Things Happen: A Theory of Causal Explanation (2003) won the 2005 Lakatos Award in philosophy of science. His book Causation with a Human Face: Normative Theory and Descriptive Psychology will be published by Oxford University Press this fall.


Tobias Gerstenberg

Accountability

Tobias Gerstenberg is an Assistant Professor of Psychology at Stanford University. He leads the Causality in Cognition Lab (CICL, http://cicl.stanford.edu). The CICL studies the role of causality in people's understanding of the world, and of each other. Professor Gerstenberg's research is highly interdisciplinary. It combines ideas from philosophy, linguistics, computer science, and the legal sciences to better understand higher-level cognitive phenomena such as causal inference and moral judgment. The CICL's research uses a variety of methods that include computational modeling, large-scale online experiments, developmental studies with children, as well as eye-tracking experiments with adults. Professor Gerstenberg's work has appeared in top journals including Psychological Review, Journal of Experimental Psychology: General, Psychological Science, Cognitive Psychology, Cognition, and Cognitive Science.


19:00 - 19:05

Closing Remarks

Krishna Gummadi