Goal of the unit: The goal of this unit is to explore how FATE techniques might be put into action. Specifically, we explore how the concepts surrounding FATE are perceived by different stakeholders (e.g. developers, end-users).
Learning objectives:
To understand the importance of considering stakeholder perceptions of fairness in algorithmic systems
To become familiar with the techniques for studying stakeholder perceptions of FATE issues
To explore the findings to date on different stakeholders’ perceptions of FATE (including those of end-users, future developers and the general public) and, in particular, of fairness
To explore how we can raise different stakeholders’ awareness of FATE topics
To understand the role of different stakeholders in promoting FATE
Summary
Data-driven algorithms are becoming increasingly prevalent in everyday systems. For instance, they are used to filter job applications, assess credit requests, inform judicial and medical decisions, support driving safety, and in many other high-risk applications that may affect people’s access to resources and opportunities. It is now widely acknowledged that algorithmic systems do not always behave as they should, making decisions that may reproduce and/or amplify social stereotypes and inequalities. These systems are usually opaque (often referred to in the literature as "black-box"), in that their reasoning and results are not always interpretable to their users, nor are they always fair towards users. Thus, it is important to examine how different stakeholders perceive Fairness, Accountability, Transparency and Ethics (FATE) in AI systems.
In Unit 4, we examine the approaches used to study stakeholders’ perceptions surrounding data-driven AI systems, including those of developers and end-users, as well as some key findings to date. The goal is to understand how we might raise their awareness of FATE and their own role in promoting more ethical development and use of AI systems.
In the first video lecture, Dr Styliani Kleanthous (Open University of Cyprus, CYENS Centre of Excellence) provides some insights on how future developers perceive fairness in algorithmic decision-making. The lecture examines the role that academic education plays in shaping their understanding of the decision-making process, as well as their critical thinking about the factors involved. In the second video lecture, Dr Kleanthous discusses the results of an empirical study investigating how students in fields adjacent to algorithm development perceive algorithmic decision-making.
In the third video lecture, Prof. Tsvi Kuflik (The University of Haifa) discusses the need to understand stakeholders' fairness perceptions surrounding algorithmic systems, particularly those that are opaque. The lack of explanation about how a system works and how and why decisions/recommendations are made carries the risk of undetected bias, discrimination and perceptions of unfairness. The talk discusses the challenges posed by opaque systems and state-of-the-art research towards developing solutions, presented within the framework of a holistic model of algorithmic fairness. Prof. Kuflik also notes that we can understand stakeholders' fairness perceptions by exploring what influences users' perceptions of fairness and by finding ways to measure perceived fairness.
In the final video lecture, Ms Casey Dugan (IBM Research) discusses the impact automation has on the workforce, as well as how users interact with intelligent systems and how such systems need to be designed to most effectively support their users. Dugan explores the theme of collaboration versus automation with AI through a number of research projects and scientific studies conducted at IBM Research in the context of AI Lifecycle Management, Automated Model Generation and Exploration for Data Scientists, Human-in-the-Loop Data Labeling, Explainability and Trust in AI systems, and AI-Infused Process Automation, as well as Generative Models and how they will fundamentally change how humans interact with AI systems in the creative industries and for content generation. This talk gives a unique industry perspective on designing and building AI systems with users in mind.
Five suggested readings accompany the video lectures. The first article, by Kasinidou et al., discusses future developers’ perceptions of fairness in algorithmic decision-making systems. In the second article, Holstein et al. provide a systematic investigation of industry teams’ challenges and needs in developing fairer algorithmic systems. In the third article, Woodruff et al. provide a novel exploration of how traditionally marginalized populations perceive algorithmic fairness, highlighting that the way companies handle algorithmic fairness interacts significantly with user trust. In the fourth article, Wang et al. provide insights into data scientists' perceptions of automated AI technology. In the final article, Ashktorab et al. investigate human-AI collaboration in the context of a cooperative AI-driven game, providing insights into how users perceive their partner when they are told the partner is a bot versus when they are told the partner is a human.
Kasinidou, M., Kleanthous, S., Barlas, P., & Otterbacher, J. (2021, March). I agree with the decision, but they didn't deserve this: Future Developers' Perception of Fairness in Algorithmic Decisions. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 690-700). Link: https://dl.acm.org/doi/10.1145/3442188.3445931
Holstein, K., Wortman Vaughan, J., Daumé III, H., Dudik, M., & Wallach, H. (2019, May). Improving fairness in machine learning systems: What do industry practitioners need? In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1-16). Link: https://dl.acm.org/doi/10.1145/3290605.3300830
Woodruff, A., Fox, S. E., Rousso-Schindler, S., & Warshaw, J. (2018, April). A qualitative exploration of perceptions of algorithmic fairness. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1-14). Link: https://dl.acm.org/doi/10.1145/3173574.3174230
Wang, D., Weisz, J. D., Muller, M., Ram, P., Geyer, W., Dugan, C., ... & Gray, A. (2019). Human-AI collaboration in data science: Exploring data scientists' perceptions of automated AI. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1-24. Link: https://dl.acm.org/doi/abs/10.1145/3359313
Ashktorab, Z., Liao, Q. V., Dugan, C., Johnson, J., Pan, Q., Zhang, W., ... & Campbell, M. (2020). Human-AI collaboration in a cooperative game setting: Measuring social perception and outcomes. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW2), 1-20. Link: https://dl.acm.org/doi/10.1145/3415167
One activity is available for this unit, which will enable you to explore how FATE techniques might be put into action.
You can find the activity's description and a submission form here.
By taking this quiz, you will be able to assess the knowledge you have gained from this unit.
You will receive feedback immediately via Google Forms once your responses are submitted.