Michal Moshkovitz
What keeps me up at night these days is building machine learning models that are trustworthy and reliable. I specifically focus on explainable and interpretable machine learning for unsupervised, supervised, and reinforcement learning. I'm thinking about:
How can we design explanation methods with guarantees? (e.g., interpretable clustering)
How can we develop models that simultaneously satisfy several trustworthiness goals (e.g., interpretability and robustness)?
How can we evaluate different explanation methods?
I also have other work exploring how different constraints affect learning:
What can be learned with bounded memory?
What are the implications of online decision-making?
You can now find me at the Bosch Center for AI. Before that, I was a postdoc at Tel-Aviv University, hosted by Prof. Yishay Mansour, and before that I was a postdoctoral fellow at the Qualcomm Institute at the University of California, San Diego. I did my Ph.D. at the Edmond & Lily Safra Center for Brain Sciences, Hebrew University of Jerusalem, Israel. I hold an M.Sc. in Computer Science, an M.Sc. in Computational Neuroscience, and a B.A. in Computer Science.
I interned with the Machine Learning for Healthcare and Life Sciences group at IBM Research and the Foundations of Machine Learning group at Google.
Contact: michal dot moshkovitz at mail dot huji dot ac dot il