Cynthia Rudin, Earl D. McLean, Jr. Professor of Computer Science, Electrical and Computer Engineering, Statistical Science, Mathematics, and Biostatistics & Bioinformatics at Duke University
Title: Do Simpler Machine Learning Models Exist and How Can We Find Them?
Abstract:
While the trend in machine learning has been toward building ever more complicated (black-box) models, such models are not as useful for high-stakes decisions: black-box models have led to mistakes in bail and parole decisions in criminal justice, flawed models in healthcare, and inexplicable loan decisions in finance. Simpler, interpretable models would be better. Thus, we consider questions that diametrically oppose the trend in the field: for which types of datasets should we expect to find simpler models at the same level of accuracy as black-box models? And if such simpler-yet-accurate models exist, how can we use optimization to find them? In this talk, I present an easy calculation that checks for the possibility of a simpler (yet accurate) model before computing one. This calculation indicates that simpler-but-accurate models exist in practice more often than you might think. Also, some of these simple models are (surprisingly) small enough to be memorized or printed on an index card.
This is joint work with many wonderful students, including Lesia Semenova, Chudi Zhong, Zhi Chen, Rui Xin, Jiachang Liu, Hayden McTavish, Jay Wang, Reto Achermann, Ilias Karimalis, and Jacques Chen, as well as senior collaborators Margo Seltzer, Ron Parr, Brandon Westover, Aaron Struck, Berk Ustun, and Takuya Takagi.
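To make the abstract's "check before computing" idea concrete, here is a minimal sketch in the spirit of the Rashomon-set analysis this line of work builds on: it compares a black-box model's cross-validated accuracy against a family of very simple models. This is an illustrative proxy, not the talk's exact calculation; the dataset, the model families, and the epsilon tolerance are all assumptions.

```python
# Hedged sketch: an empirical proxy for "do simpler-yet-accurate models exist?"
# NOT the talk's exact calculation; dataset, model families, and epsilon
# below are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Reference accuracy of a black-box model.
black_box_acc = cross_val_score(
    RandomForestClassifier(n_estimators=200, random_state=0), X, y, cv=5
).mean()

# Accuracies of many *simple* models (shallow trees, varied hyperparameters).
simple_accs = [
    cross_val_score(
        DecisionTreeClassifier(max_depth=d, min_samples_leaf=m, random_state=0),
        X, y, cv=5,
    ).mean()
    for d in (2, 3, 4)
    for m in (1, 5, 20)
]

# If a non-trivial fraction of simple models lands within epsilon of the
# black box, a simpler-yet-accurate model plausibly exists for this dataset.
epsilon = 0.01
ratio = np.mean([a >= black_box_acc - epsilon for a in simple_accs])
print(f"black box: {black_box_acc:.3f}, best simple: {max(simple_accs):.3f}, "
      f"fraction within epsilon: {ratio:.2f}")
```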
Mihaela van der Schaar, John Humphrey Plummer Professor of Machine Learning, Artificial Intelligence and Medicine at the University of Cambridge. Fellow at The Alan Turing Institute in London. Director of the van der Schaar Lab, and founder and director of the Cambridge Centre for AI in Medicine (CCAIM).
Title: Interpretable Machine Learning for Time-Series Forecasting
Abstract:
In this talk, I will present two approaches for making machine learning for time-series forecasting interpretable and actionable.
The first approach uses post-hoc interpretability to turn black-box machine learning into interpretable insights. I will describe the first methods for feature-based and example-based interpretability of machine learning for time-series forecasting: Dynamask (ICML 2021) and SimplEx (NeurIPS 2021), respectively.
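As a rough illustration of mask-based, feature-level interpretability for time series, the sketch below learns, Dynamask-style, a mask over (time, feature) entries that keeps a forecaster's output unchanged while staying as sparse as possible. The toy forecaster, the mean baseline, and the loss weights are illustrative assumptions, not the paper's exact objective.

```python
# Hedged sketch of a Dynamask-style saliency mask for a time-series model:
# learn M in (0,1)^{T x D} so that blending unimportant entries toward a
# baseline barely changes the forecast, while the mask stays sparse.
# Model, baseline, and loss weights are illustrative assumptions.
import torch

T, D = 50, 4                         # time steps, features
model = torch.nn.Sequential(          # stand-in black-box forecaster
    torch.nn.Flatten(), torch.nn.Linear(T * D, 1)
)
x = torch.randn(1, T, D)              # one observed series
baseline = x.mean(dim=1, keepdim=True).expand_as(x)  # "uninformative" input

logit_mask = torch.zeros(1, T, D, requires_grad=True)
opt = torch.optim.Adam([logit_mask], lr=0.1)
target = model(x).detach()            # forecast to preserve

for _ in range(200):
    mask = torch.sigmoid(logit_mask)              # M in (0,1)
    x_masked = mask * x + (1 - mask) * baseline   # perturb unimportant entries
    fidelity = (model(x_masked) - target).pow(2).mean()
    sparsity = mask.mean()                        # push the mask toward 0
    loss = fidelity + 0.1 * sparsity
    opt.zero_grad(); loss.backward(); opt.step()

saliency = torch.sigmoid(logit_mask).detach()     # per-(time, feature) importance
```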
The second approach directly discovers interpretable closed-form ordinary differential equations (ODEs) from data using machine learning. I will describe D-CODE (ICLR 2022), the first automated tool to distill closed-form ODEs from observed trajectories, which we believe will accelerate the modeling of dynamical systems in finance, healthcare, and beyond.
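To illustrate what distilling a closed-form ODE from a trajectory can look like, the sketch below uses SINDy-style sparse regression over a library of candidate terms with finite-difference derivatives. This is a simpler stand-in for, not a reimplementation of, D-CODE's derivative-free approach; the damped-exponential example, the term library, and the threshold are assumptions.

```python
# Hedged sketch of recovering a closed-form ODE from a trajectory via
# SINDy-style sparse regression (Brunton et al.), NOT D-CODE itself.
# Example system, candidate library, and threshold are assumptions.
import numpy as np

# Simulate a trajectory of dx/dt = -0.5 * x (ground truth to rediscover).
t = np.linspace(0, 10, 500)
x = np.exp(-0.5 * t)

dx = np.gradient(x, t)                                       # numerical derivative
library = np.column_stack([np.ones_like(x), x, x**2, x**3])  # candidate terms
names = ["1", "x", "x^2", "x^3"]

# Least squares, then hard thresholding of small coefficients for sparsity.
coef, *_ = np.linalg.lstsq(library, dx, rcond=None)
coef[np.abs(coef) < 0.05] = 0.0

rhs = " + ".join(f"{c:.3f}*{n}" for c, n in zip(coef, names) if c != 0.0)
print("dx/dt =", rhs)   # expect approximately -0.5*x
```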
AI Research Director
Executive Director for Data and Analytics & Chief Data Officer
Francesca Toni, Professor of Computational Logic in the Department of Computing, Imperial College London, and Royal Academy of Engineering / J.P. Morgan Research Chair in Argumentation-based Interactive Explainable AI