The last decade has seen a dramatic improvement in the capabilities of machine learning methods, and their areas of application have exploded, impacting fields from medical imaging diagnosis and algorithmic trading to product recommendations and molecular biology. At the same time, the increasing complexity of these models, mostly based on deep artificial neural networks, has rendered them less interpretable: it is difficult to understand which input features, and to what extent, are responsible for producing a certain output.
This mini-symposium brings together researchers working in computer science, statistics, and related areas to discuss the fundamental underpinnings of explainable machine learning. The talks will present the latest results on the mathematical formulation, analysis, and guarantees of interpretable predictors.
Stanford
Berkeley
Ludwig Maximilian
University of Munich
University of Southern California, Marshall School of Business
Google Research, India
Technion
Johns Hopkins University
Memorial Sloan Kettering Cancer Center
Our mini-symposium is committed to a harassment-free experience for everyone, regardless of gender, gender identity and expression, age, sexual orientation, disability, physical appearance, race, ethnicity, religion (or lack thereof), or technology choices, and we abide by SIAM's code of conduct. We encourage all attendees to review it at the following link.