Goal of the unit: The goal of this unit is to explain the terms transparency and explainability in data-driven systems and to promote the use of explainability methods for providing transparency in algorithmic systems.
Learning objectives:
To become familiar with the terms explainability, interpretability, and transparency.
To explore explainability methods for machine learning models.
To become familiar with personalized explainability methods.
To learn how a system's transparency might affect user interaction with it.
Summary
An important aspect of developing an algorithmic system is ensuring its transparency by providing reliable explanations of its process and outcomes. Explainability and interpretability in algorithmic systems such as Machine Learning (ML) aim to ensure compatibility with social values such as fairness, privacy, causality, and trust. A system’s users need to be given a justification for the decisions made, not merely an explanation of the outcome. This raises the importance of developing appropriate explainability methods for designing more interpretable algorithmic systems.
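As a concrete illustration of the kind of explainability method discussed in this unit, the sketch below computes permutation feature importance, a simple model-agnostic technique for explaining a non-transparent ("black-box") model by querying its predictions only. The model, feature names, and dataset here are entirely hypothetical, invented for this example:

```python
import random

# A hypothetical "black-box" scoring model (weights are illustrative):
# we pretend we can only query its predictions, not inspect its internals.
def black_box(income, debt, age):
    return 1 if (0.6 * income - 0.9 * debt) > 0.1 else 0

random.seed(0)

# Synthetic dataset: 500 rows with three features drawn from [0, 1).
data = [(random.random(), random.random(), random.random()) for _ in range(500)]
labels = [black_box(*row) for row in data]

def accuracy(rows):
    """Fraction of rows where the model agrees with the original labels."""
    return sum(black_box(*r) == y for r, y in zip(rows, labels)) / len(rows)

baseline = accuracy(data)  # 1.0 by construction

# Permutation importance: shuffle one feature column at a time and measure
# how much the model's agreement with the original labels drops.
importances = {}
for i, name in enumerate(["income", "debt", "age"]):
    col = [row[i] for row in data]
    random.shuffle(col)
    permuted = [row[:i] + (v,) + row[i + 1:] for row, v in zip(data, col)]
    importances[name] = baseline - accuracy(permuted)

for name, drop in importances.items():
    print(f"{name}: accuracy drop = {drop:.3f}")
```

Because the toy model ignores `age`, shuffling that column causes no accuracy drop, while shuffling `income` or `debt` does; the drop serves as an importance score without ever opening up the model.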
In this unit, we will explore the concepts of Explainable AI, interpretable ML, and transparency in decision-making and in algorithmic systems more generally.
In the first video lecture, Prof. Min Kyung Lee (The University of Texas at Austin) presents empirical findings on people’s perceptions of trust and fairness around algorithms that make managerial and resource-allocation decisions.
In the second video lecture, by Prof. Rob Procter (The University of Warwick), explainable AI is presented in contexts that involve ad-hoc collaboration, where agency and accountability in decision-making are achieved and sustained socially and interactionally.
The third video lecture is an introductory video to “Explainable AI” presented by Dr. David Aha (Acting Director, Navy Center for Applied Research in AI, U.S. Naval Research Laboratory).
Three suggested readings accompany the above video lectures. The first is the proceedings of the TExSS workshop at IUI 2021, which explores issues that arise in designing, developing, and evaluating intelligent user interfaces that provide system transparency or explanations of their behavior. It also presents approaches for mitigating bias without access to the internal processes of the system, such as awareness, data provenance, and validation. The second, the article by Guidotti et al., “A survey of methods for explaining black box models,” is a literature review of the explainability methods proposed for explaining the outcomes of machine learning models, especially non-transparent models (“black-box” models, as they are called in the literature) such as deep learning. The third, the article by Eslami et al. entitled “User Attitudes towards Algorithmic Opacity and Transparency in Online Reviewing Platforms,” presents how users perceive and interact with potentially biased and deceptive opaque algorithms and what factors are associated with these perceptions.
Smith-Renner, A. M., Loizou, S. K., Dodge, J., Dugan, C., Lee, M. K., Lim, B. Y., ... & Stumpf, S. (2021, April). TExSS: Transparency and Explanations in Smart Systems. In 26th International Conference on Intelligent User Interfaces (pp. 24-25). Link: https://dl.acm.org/doi/10.1145/3397482.3450705
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2018). A survey of methods for explaining black box models. ACM computing surveys (CSUR), 51(5), 1-42. Link: https://dl.acm.org/doi/10.1145/3236009
Eslami, M., Vaccaro, K., Lee, M. K., Elazari Bar On, A., Gilbert, E., & Karahalios, K. (2019, May). User attitudes towards algorithmic opacity and transparency in online reviewing platforms. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1-14). Link: https://dl.acm.org/doi/10.1145/3290605.3300724
For this unit, one activity is available, which will enable you to explore the terms transparency and explainability in an algorithmic system.
You can find the activity's description and a submission form here.
By taking this Quiz, you will be able to assess the knowledge you gained from this Unit.
You will receive feedback immediately via Google Forms once your responses are submitted.