My mission is to build socially beneficial, robust, and theoretically substantiated machine learning systems.
I am a member of Pembroke College, funded by the Cambridge-Tübingen PhD fellowship, generously supported by Microsoft.
During my PhD, I spent time at DeepMind and Amazon.
My background is in Physics and Mathematics. I was fortunate to spend time at Harvard, working with Paul Chesler and Wilke van der Schee, as well as at Stanford, working with William East and Tom Abel.
- 07/2019: Organizing the NeurIPS 2019 workshop on Human-Centric Machine Learning. Submissions open soon!
- 06/2019: Improving consequential decision making under imperfect predictions @ KDD 2019 Workshop (DCCL)
- 06/2019: Convolutional neural networks: a magic bullet for gravitational-wave detection? @ Physical Review D
- 05/2019: The sensitivity of counterfactual fairness to unmeasured confounding @ UAI 2019
- 02/2019: 2nd edition of our book Quod erat knobelandum is now available at Springer [German]
- 12/2018: Organized the NeurIPS 2018 Workshop on Privacy Preserving Machine Learning
- 11/2018: Generalization in anti-causal learning @ NeurIPS 2018 Workshop (CRACT)
- 05/2018: Two papers @ ICML 2018
Selected Publications & Projects
Fair Decisions Despite Imperfect Predictions
NK, Manuel Gomez-Rodriguez, Bernhard Schölkopf, Krikamol Muandet, Isabel Valera
shorter version: Improving consequential decision making under imperfect predictions
KDD 2019 Workshop on Data Collection, Curation, and Labeling for Mining and Learning (DCCL)
News mentions and science communication
- MPI news: The Question is Why -- Algorithms learn a Sense of Fairness (2017)
- Financial Times: Finding a fair way to tame the bigoted bots (2018)
- New Scientist: How to stop artificial intelligence being so racist and sexist (2018)
- Second Nexus: Niki Kilbertus of Max Planck Institute for Intelligent Systems Has a Plan to Remove Bias From AIs (2018)
- The Alan Turing Institute: Can justice be blind when it comes to machine learning? (2018)
- MPI news: Blind Justice -- Researchers take new approach to machine learning fairness by applying privacy methods (2018)
- Max Planck Forschung: Auf Fairness programmiert (Programmed for fairness) (2019) [German]
- Matt, Adrià, and Adrian discussed our work in three different episodes of the Talking Machines podcast (2018, 2019)
Talks
- Albert Einstein Institute (Potsdam-Golm, Germany): Machine Learning-powered CBC Search
- Alan Turing Institute (London, UK): Fairness in Machine Learning
- Max Planck Institute for Software Systems (Saarbrücken, Germany): Fairness in Machine Learning
- Stanford University (CA, USA): Searching for Gravitational Waves with Machine Learning
- University of Regensburg (Regensburg, Germany): Fully Convolutional Networks for Gravitational Wave Searches
- Microsoft Research (Cambridge, UK): Learning Independent Causal Mechanisms
- Amazon Research (Cambridge, UK): Blind Justice: Fairness with Encrypted Sensitive Attributes
Other Activities
- Organizer of the NeurIPS 2018 workshop on Privacy Preserving Machine Learning
- Main organizer of the CamTue workshops: Mallorca 2017, Tenerife 2018
- I received a Digital Impact Grant from Stanford PACS
- I thoroughly enjoy teaching: I was active in the Schülerzirkel Mathematik in Regensburg, served as a TA for many courses in Math, Physics, and CS, lectured a semi-annual course on Computer and Microcontroller Technology, and co-lectured the course Green IT at the 2016 summer academy in Leysin, organized by the German Academic Scholarship Foundation.
- I like building things, for example Babyzen, a flexible sensor BoosterPack [codeproject article] [short video] [report (pdf)], among other small projects.