Machine learning (ML) is integrated into our society: it is present in the judicial, health, transportation, and financial systems. As this integration deepens, so does the need for ML transparency. The fields of explainable and interpretable ML attempt to add transparency to ML, either by attaching explanations to a given black-box model or by building models that are interpretable and self-explanatory.
Despite the importance of explainability and interpretability, their foundations are still missing. Basic questions remain unanswered: How should explainability and interpretability be defined? Is there a tradeoff between performance and interpretability? How should the quality of an explanation be evaluated? In this talk, we begin to answer these questions in the realms of supervised, unsupervised, and reinforcement learning.
Michal is a research scientist at the Bosch Center for AI and at Tel-Aviv University, hosted by Yishay Mansour, currently as a visiting researcher and previously as a postdoc. Before that, she was a postdoctoral fellow at the Qualcomm Institute of the University of California San Diego. Her interests lie in the foundations of AI, and for the last three years she has focused on developing the mathematical foundations of explainable machine learning. Michal received her Ph.D. from the Hebrew University and an M.Sc. from Tel-Aviv University. During her Ph.D., Michal interned at the Machine Learning for Healthcare and Life Sciences group of IBM Research and the Foundations of Machine Learning group of Google. Michal was selected as a 2021 MIT EECS Rising Star and is a recipient of the Anita Borg Scholarship from Google and the Hoffman Scholarship from the Hebrew University.